Interview with Connor Dunlop from the Ada Lovelace Institute: New insights into AI governance

The Ada Lovelace Institute is an independent research institute, founded in 2018 and based in London and Brussels. It works to ensure that data and AI work for people and society, and believes that the benefits of data and AI must be justly and equitably distributed and must enhance individual and social wellbeing. We spoke with Connor Dunlop about the Institute’s recent research on AI governance, the outcomes of the UK’s AI Safety Summit, and what he’s advocating for in 2024.

Connor is the European Public Policy Lead at the Ada Lovelace Institute. Based in Brussels, he is responsible for leading and delivering Ada’s engagement strategy on the governance and regulation of AI in Europe.

What are you working on these days, after the AI Act deal was recently reached?

The run-up to the final trilogue and the days around it were all-consuming, so it is indeed nice to have some breathing space to work on our wider research, which took a bit of a backseat in the last weeks of 2023. We just published a paper with Merlin Stein on lessons AI governance can learn from how life sciences regulators, such as the US Food and Drug Administration (FDA), oversee complex, novel technologies. Although the FDA has its limitations and unique features, it offers valuable insights for designing a regulatory system for AI. There are oversight mechanisms we can learn from, such as ways to reduce information imbalances between regulators and the AI industry. These mechanisms involve both broad and targeted external evaluations, and they cover the entire life cycle of an AI product.

We might also draw ideas on how to enhance democratic legitimacy in AI oversight, as the FDA includes patient representatives in its market approval processes.

Besides that, we have forthcoming publications on algorithmic auditing, foundation model evaluations, the environmental impact of AI, and several comparative governance pieces.

In our community of grantees, you are working alongside social justice and digital rights organisations. How has this collaboration transformed your AI advocacy work?

The work happening across the ecosystem is fantastic and inspiring, and it is an honour to contribute in some small way. We have learned a great deal from these organisations, from advocacy and engagement strategies to specific legislative insights. The key learning, however, has been the need to keep a people-centred approach to our work. Given the breakneck speed of AI development and the new use cases we see every day, it is easy to become overly focused on the technology and to believe that the outcomes are pre-determined. But the community we work with always ensures that we keep agency at the centre, and reminds me that the outcomes we see are ultimately based on human choices, from data cleaning practices in the development phase to decisions on bias and fairness testing when deploying an AI system. These choices impact people’s lives, and my engagement with our collaborators has been invaluable in understanding these impacts and ensuring we keep them in mind when talking about AI technologies.

It’s been a few months since the UK AI Safety Summit. Has the Summit delivered anything tangible for people and society?

The outcome of the Summit – the Bletchley Declaration – highlighted how AI is already causing harm in many everyday contexts and poses a broad range of risks to society. It concluded that urgent action is needed at national and international levels. This is very welcome, but the momentum needs to be maintained and backed up by real enforcement of laws that already apply to AI – such as equalities and intellectual property law – as well as by new, enforceable protections. Unfortunately, the latter do not seem to be forthcoming in the UK, and it is hard to offer tangible protections, trust and safety for people and society without meaningful national regulation.

Another outcome of the Summit – the UK’s AI Safety Institute – is also welcome. It will work on evaluations of foundation models, which is helpful for reducing information asymmetries, but we need to await evidence of how this will work in practice: for example, the level of access granted by developers, how its ‘knowledge-sharing’ function will operate to support existing regulators, and whether this will enable information diffusion across the wider ecosystem.

What AI governance issues would you like to see at the top of the AI governance agenda in 2024?

A key one is AI liability, and in the EU context the AI Liability Directive (AILD) in particular. It is essential to clearly allocate responsibility to the actors who have control over the impacts of AI, and to ensure meaningful repercussions for harmful outcomes. The Product Liability Directive is a good start, as was the original AILD proposal, but significant gaps remain around immaterial damages and where the burden of proof lies when an individual is harmed. So in 2024 it is essential to restart work on the AILD and ensure it clarifies scope, particularly in light of the changes to the AI Act in 2023.

Another important milestone is the code of practice for general-purpose AI models under the AI Act. This looks set to be an industry-led voluntary code, which history shows is unlikely to offer a meaningful form of accountability. At critical junctures, companies will choose to prioritise corporate incentives over safety in the absence of a strong regulatory framework. Given the potential scale of harm from GPAI models posing systemic risks, as defined in the AI Act, it is essential that work on the code of practice gets underway in the coming months, that it includes all relevant stakeholders so that developers are held to account, and that its outcomes are made binding via secondary legislation.

Beyond that, we see it as an urgent priority that the EU AI Office is set up and adequately resourced. Similar institutions are being set up in the UK, US and beyond with a specific remit to conduct independent evaluations, so ensuring (diverse) expertise to conduct third-party evaluation will be essential. To do this, the AI Office will need to be funded on a par with institutions that govern other domains in which trust and safety are paramount and where the underlying technologies form important parts of national infrastructure (think civil nuclear, civil aviation, medicines, road and rail).

One of your research areas is AI accountability and public participation in AI governance. Have you yet found what makes public participation in AI governance meaningful?

Indeed, this is one of our key focuses. We believe that governing AI will require meaningful involvement of people and communities, particularly those most affected by the technologies, so that benefits are equitably distributed and important harms are not overlooked. We have conducted rapid evidence reviews demonstrating that people want to have a say in ways that genuinely impact decision-making. Our evidence shows that for this to happen meaningfully, decision-makers must commit to embedding participatory processes in governance, and complex or contested topics in particular need careful and deep public engagement. This engagement should be participatory and deliberative, not simply consultations or tick-box exercises.

In the EU context, we therefore welcome the inclusion in the AI Act of the AI Office’s Advisory Forum, a body responsible for providing the AI Office with input from different stakeholder groups. If done well, this Forum could be a mechanism for eliciting feedback from affected persons on AI questions with societal-level implications (for example, the release of large-scale models and the areas in which they are deployed). Something similar has been suggested by AI labs: OpenAI and Anthropic, for example, have participated in ‘alignment assemblies’, which seek to use public opinion to inform criteria for the responsible release of models. While experiments of this sort are valuable, we would recommend formalising them through public participation mechanisms that also have regulatory oversight, and the AI Office Advisory Forum could be a great means of doing this.

What’s a good book or podcast on AI that you would recommend to our readers?

Books:

Podcasts:
