Interview with Sarah Chander and Ella Jakubowska from EDRi


The EDRi network is a dynamic and resilient collective of NGOs, experts, advocates and academics working to defend and advance digital rights across the continent. As EDRi celebrates its 20th anniversary this month, the European AI & Society Fund spoke to Sarah and Ella about the past, what keeps them busy these days, and what the future holds for the digital rights field.

Sarah leads EDRi’s work on the AI Act and the Decolonising Digital Rights work from EDRi’s side. She has a background in anti-racist organising.
Ella is a self-described policy nerd who works on regulating the use of biometric data and illegal content online. Trained in feminist approaches to tech, society and human rights, she tries to bring an intersectional feminist lens to all her work.

Last week EDRi hit the 20-year mark. EDRi has been around to see the trends and transitions in the digital field. How have both the approaches to and the threats against digital rights transformed over this time?

Sarah: From contesting expanding forms of state surveillance, the dominance of Big Tech’s business model, and threats to encryption and the privacy of our communications, to arguing for red lines on the most harmful and oppressive uses of AI, EDRi is kept very, very busy. As EDRi Director Claire Fernandez recently told Politico, we are very much the underdog up against these vast exercises of power. At least from my perspective, what is changing in how we work is an increasing recognition that we cannot contest this alone, and that actually, the field needs to change in order to better connect with other social, racial and economic justice movements.

What is your highest priority issue at the moment for the European AI Act negotiations?

Sarah: It’s very hard to choose, as we are constantly juggling between pushing our positive asks about how the AI Act can protect people from harmful systems and contesting changes that would undermine the AI Act, like changing the risk-classification process to give tech companies full discretion to decide whether their own system is high risk. But if you forced me to choose, I would say our calls for bans on predictive policing and on uses of AI in migration are the highest priority, because they seek to prevent harms that would be felt specifically by racialised people – those most affected by criminalisation and surveillance.

Ella: For me, the priority is outright banning harmful biometric surveillance systems (such as facial recognition in public spaces). This is something I have been fighting for since I joined EDRi three and a half years ago. When we first demanded a ban in EU law, a lot of people told us that we were asking for the impossible. But a huge amount of work from our amazing network of members and volunteers – which stands on the shoulders of anti-surveillance and anti-racism activists around the world – has really shifted the political and public debate.

In a recent open letter, EDRi warned that the draft French law on the 2024 Olympics and Paralympics would enable algorithmic mass surveillance. Could you walk us through what that means in practice and what is at stake?

Ella: Algorithmic mass surveillance is one of a spectrum of techniques that police, local authorities and companies deploy in order to use our faces, our bodies, our voices and our movements against us. These systems use cameras or sensors to identify who we are and what we are doing, and in some cases they even claim to know our inner thoughts and intentions.

In the case of the French Olympics and Paralympics, the French government wants to identify suspicious people or objects before crimes happen. There’s a fallacy that this is necessary for security. But in addition to being deeply invasive, there’s no evidence that it actually works. In fact, these systems often do the opposite: they embed deeply flawed and discriminatory perceptions of who in our societies gets considered suspicious. So it begs the question: whose security are we really talking about?

We often hear industry and policymakers talking about trustworthy AI. Is there such a thing, and is “trustworthiness” the appropriate lens through which to look at AI?

Sarah: “Trustworthy AI” has to be seen within a wider context of ethics washing spearheaded by large technology companies to co-opt debates on technology regulation. By placing terms like trustworthy AI on the table, companies implicitly ask us to accept the premise that technology is neutral. They tell us that trustworthy AI is a possibility, we just need to make the right technical fixes, tweak the system here and there, and everything will be lovely.
Actually, AI systems must be seen as part of the wider big tech business model of domination, extending into all public domains and extracting from consumers, from labour and of course from the environment. AI systems themselves fit into wider systems of racial capitalism, meaning that there is no way to perfect or improve these systems, especially when they are used in wider systems of surveillance, criminalisation, punishment and control, specifically of racialised and marginalised communities. There can be no ‘trustworthy’ predictive policing system, for example. Even if such systems worked perfectly in technical terms, they would simply be more efficient instruments of discrimination and control.

What else keeps you busy?

Ella: Professionally speaking, I’ve got quite a full plate dealing with one of the most dangerous and misguided legislative proposals that I have ever seen: the draft EU regulation to tackle child abuse online. On a more personal (and less apocalyptic) level, I recently took up clothes-making. I’m currently halfway through sewing a neon eighties-inspired babygrow for my friend’s new baby.

Sarah: I’m mostly working with incredible coalitions of organisations and activists to influence the AI Act, specifically from perspectives that centre justice. We are a small group of people, but pretty mighty, and I’m in awe of the tenacity of our demands and what we have achieved so far. I am also working with Laurence Meyer and Joel Hide on the DFF-EDRi Decolonising process, a project initiated by Nani Jansen Reventlow, exploring how the digital field can be in better service to community. Outside of EDRi I work with a team of amazing people at the Equinox Initiative for Racial Justice, advocating on racial justice-related issues. All of this largely prevents me from getting my nails done on a regular basis.

And lastly, can you share your current go-to book or podcast for diving into the workings of the digital world?

Ella: Because our jobs are so intense, I actively avoid entertainment and culture that relates to my work and instead focus on my first love: contemporary fiction. But I’ve heard a lot of praise for the Timnit Gebru episode of Paris Marx’s podcast Tech Won’t Save Us, so maybe I’ll break my own rule for that.

Sarah: ‘Beyond Digital Capitalism: new ways of living’ was a really good read and is available online. Also, Global Data Justice just released a podcast and a playbook called ‘Resist and Reboot’, which talks about different forms of solidarity in digital rights work.
