How the AI Accountability Community of Practice helps collectively shape the direction of AI in Europe

Last month at the first face-to-face meeting of the AI Accountability Community of Practice, around 20 European AI & Society Fund grantees shared their approaches to participatory community engagement, data requests, investigations, AI-harm reporting, and strategic litigation.  

Running into 2026, the Community of Practice brings together civil society actors from across our grantee community to share knowledge and build collective power, so that governments and tech companies are held responsible when AI causes harm.

Although the participants hail from diverse countries – France, the Netherlands, the UK, Spain, Germany, Ireland, Croatia – and their organisations are tackling everything from racial justice to consumer rights, what they all have in common is the drive to hold institutions, states, and tech corporations accountable for how AI is being deployed.

Why did we start a Community of Practice? 

The European AI & Society Fund is in a unique position to bring civil society actors from across Europe, and across causes, together to shape AI. 

Lots of our grantees are experimenting with new approaches and testing legislation for the first time. We think it’s important that we learn from one another’s successes and failures so that we start to identify effective tactics for strengthening accountability – whether working to curb the impacts of data centres on local communities, using litigation to fight the unfair treatment of delivery riders, or pushing back against the growing surveillance of Black communities.

We wanted to complement our grant funding with a broader peer support group to collectively face the fast-moving and enormous challenge of AI accountability. The Community of Practice was born out of a desire to hold space for civil society practitioners over a longer period of time, as people’s needs, strategies and learnings evolve in this emerging field, in addition to the other field- and capacity-building activities we offer.

In our October meeting, grantees explored: 

  • Why it’s important to design accountability strategies with affected communities, and what new insights this provides into lived experience 
  • Pathways for redress people have under the European AI Act and GDPR when facing AI harms 
  • Which investigation methods work to uncover how AI is used, when it is so opaque and secretive 

Putting affected communities at the heart of AI accountability frameworks 

With such a diverse group of participants, there was a risk that different levels of understanding of AI accountability and different approaches might divide the room. In practice, we found quite the opposite.

Black Learning Achievement and Mental Health (BLAM UK) shared how they are engaging affected communities to investigate the London police’s use of AI-enhanced surveillance and predictive policing algorithms. They found that traditional evidence gathering through Freedom of Information requests, statistics, and desk research failed to capture the emotional, social, and psychological harm that people experience from surveillance.

That’s why their research is purposefully inclusive, with a particular focus on the impact of AI-driven surveillance on vulnerable members of the community such as children, the elderly and people with disabilities, and they share their findings in accessible, culturally contextual and relevant ways.  

BLAM’s community-inclusive accountability framework inspired other organisations in the room to reflect on how they currently hear from affected communities, and what evidence of harm they may be overlooking.  

“Accountability frameworks must redistribute, not just monitor, institutional power. For that, people need to be included.” BLAM UK 

Sharing accountability avenues for redress from AI harms 

The Center for Democracy and Technology (CDT) shared their analysis of practical ways to hold AI systems accountable using EU legislation, including the GDPR and the AI Act.

A key recommendation was to use the GDPR for redress where possible, as the AI Act does not yet offer much in terms of remedies. Under the GDPR, people have the right to compensation (among many other rights). Under the AI Act, people have the right to an explanation and to lodge a complaint with the market surveillance authority – but the authority has no obligation to deal with complaints.

Although this was a much more technical discussion about legislation and legal levers, the practical interrogation of accountability systems appealed both to policy-minded organisations and to those with a social justice background.

Learning in a changing environment 

While civil society organisations across Europe are getting up to speed with the accountability levers available in Europe’s new landscape of tech regulation, these laws are already at risk from a political push towards deregulation.  

By building the collective strength of different groups through the AI Accountability Community of Practice, we hope they will be better able to navigate this complex and fast-changing landscape and push towards AI that serves the interests of people, society and the planet.
