Interview with #ProtectNotSurveil coalition: “Meeting Human Mobility with Care, not Surveillance”

#ProtectNotSurveil Coalition at the European Parliament.

The #ProtectNotSurveil coalition is a group of activists, organisations and researchers working to ensure laws and policies safeguard people on the move from harms emanating from AI systems. Their mission is to challenge the use of digital technologies at different levels of European Union policies and advocate for the ability of people to move and to seek safety and opportunity without risking harm, surveillance or discrimination.* 

 The coalition is currently steered by a core group, including: Chloé Berthélémy, Senior Policy Adviser at European Digital Rights (EDRi); Sarah Chander, Director of Equinox Initiative for Racial Justice; Caterina Rodelli, EU Policy Analyst at Access Now; and Alyna Smith, Deputy Director at Platform for International Cooperation on Undocumented Migrants (PICUM). 


The European AI & Society Fund’s (EAISF) team caught up with Sarah, Chloé, Alyna and Caterina to learn about experiences of people on the move and the deep interconnections between technology and migration, as well as find out about the future plans of the coalition. Read our interview below. 

EAISF: How did this collaboration emerge? And why did you choose the name #ProtectNotSurveil?

Sarah: The coalition emerged out of necessity! In 2020, EDRi began coordinating a broad coalition of civil society seeking to influence the EU’s Artificial Intelligence Act. Before the European Commission released its proposal, we knew we needed a structural approach to the law, one that meaningfully addressed how technology fits into broader systems of oppression. As we were building the wider AI Act coalition, we realised that we needed a specific group to work on AI and migration. Building on the research conducted by Petra Molnar (Technological Testing Grounds), we formed the (quite unimaginatively named) working group ‘AI migration’, where we collectively analysed available cases of AI harms in migration in Europe, drafted joint amendments to the AI Act, and ran joint advocacy activities. The bulk of this work is about coordinating, sharing knowledge between organisations, attempting to break NGO silos, getting to know each other, and drawing out political alignments. 

Caterina: As we started writing our policy recommendations on the AI Act at the end of 2021, we were still working in quite a spontaneous way. We would gather in ad-hoc meetings and, back then, we did not have a fixed group of people contributing regularly to the work. However, the more we dug into the policy work, the clearer it became that we needed to structure our way of working together, and it had to be intentional, for two main reasons. Firstly, ‘AI in migration’ is not just a use case. Looking at the AI Act through a migration lens means having to deal with the many contradictions of the Regulation: from transparency obligations, the role of the industry and the impact of ‘sandboxes’, to the responsibilities of public authorities and much more. Secondly, we needed to organise in order to resist and challenge the patriarchal approach of EU policymakers, as well as some in the digital rights field, who would claim that ‘migration is not a digital rights issue’ and that it was not a priority in the AI Act. 

The name #ProtectNotSurveil came from the need to structurally shift the priorities of EU policy-making and to reclaim the narrative around the meaning of protection and safety for all. #ProtectNotSurveil is a permanent call to the EU institutions: a call for a world where human mobility is met with care, not surveillance. 

EAISF: What have you worked on together that you are most proud of? 

Alyna: Honestly, I’m proud of what we achieved in the European Parliament, whose position reflected a large number of our coalition’s demands to regulate, and in some cases ban, harmful tech used in the migration context. But more than that, I’m proud of the fact that we were not just a group coordinating our respective lobbying efforts: we took the time to develop a shared vision and analysis of what we’re up against and what we’re about – namely, challenging the use of technology in ways that continue to dehumanise, harm and control people at and within our borders. We shared a common framing that made it possible to link with the wider AI coalition work, to draw connections between our fight in the context of migration and efforts against biometric mass surveillance, predictive policing and national security carve-outs, in a way that only added power to our collective message. The final text was a disappointment in many ways, but we laid the groundwork for the collaborative work, advocacy and resistance that will be needed in the longer term.   

EAISF: Why should people consider the role that technology and AI play in migration?  

Chloé: Technologies and migration policies have a mutually reinforcing relationship. On the one hand, the EU increasingly relies on technologies and data-driven tools to achieve its migration management objectives of controlling people’s movements, profiling and swiftly deporting them. A sprawling ecosystem of technological applications is currently being developed and used at the borders and in all migration procedures in order to serve the EU’s broader political agenda of criminalisation and racialised surveillance of migrants. In that sense, they exacerbate human rights violations and enable an inherently violent and discriminatory system. On the other hand, migration contexts constitute a very profitable and permissive space for technological testing and deployment – quite a bargain for the tech industry. Some tech “solutions” are simply in search of a problem. As technologies mirror and reinforce power relations in society, it is essential to analyse and contest migration control tech as part of a wider discussion on the systemic harms of European migration and border policies. 

Sarah: We cannot divorce questions of surveillance and technology from race, racism and criminalisation. The reason such vast resources are poured into tech in policing and migration, the reason we have such enabling legal frameworks for surveillance in these areas, is fundamentally about how Europe conceives and constructs racialised populations. For the institutions, movement is to be equated with criminality and racialised people are to be constructed as a security threat. All of this has a history in the colonial roots of technology for population management. Viewing technology through the lens of structural racism helps safeguard against mainstream narratives of ‘neutrality’ and ‘technical fixes’ that are often offered as solutions to AI-based harms. 

EAISF: As a coalition, you’ve been outspoken about the dangers of the recently passed EU Migration Pact for people on the move. What does it mean for people looking for a safe passage to Europe?  

Alyna: What the Pact does is essentially codify in law practices that some Member States have been trialling for years – practices that have been called out by activists and civil society as unacceptable and inhumane. These include “speedy” border procedures that quickly sort people into those who qualify for international protection and those who can be swiftly deported, trampling procedural safeguards and linking asylum and return in ways that are deeply problematic and harmful; and the de facto detention in high-tech prison-like facilities of people in screening procedures – as Greece is already doing in EU-funded Closed Controlled Access Centres. For people seeking to come to Europe, the Pact does very little to open new safe regular paths for them to work in dignified conditions or apply for protection, and instead entrenches many harmful practices in law – some of which are already, or are likely to be, reinforced through the use of technology. 

Chloé: We see the Migration Pact as a permissive legal framework that warrants and encourages the use of a wide array of harmful technologies, designed to filter, profile, track, detect and control people on the move. The reform of the EURODAC database – the database created to store asylum-seekers’ data and implement the Dublin rules – perfectly illustrates the accelerated shift from an inherently unfair asylum system to a truly hostile and inhumane one. EURODAC is drastically transformed to collect even more data about more people, treat them as criminal suspects by default and track their every move. For example, the database will now collect fingerprints of children as young as 6, will flag persons considered as “security threats” based on arbitrary security assessments, and will use facial recognition to detect irregular status and accelerate deportation procedures. The Pact really brings the surveillance and criminalisation of migrants to the next level. 

EAISF: You were also disappointed by some of the outcomes of the European Artificial Intelligence Act in relation to migrants’ rights – what are your concerns? And how do you plan to address these? 

Caterina: In its final text, the AI Act creates a totally separate legal framework for law enforcement, migration control and national security authorities, allowing them to indiscriminately use supposedly prohibited, as well as ‘high-risk’, AI systems against racialised people and the most marginalised in society. Unfortunately, this is not surprising. Due to the EU’s institutionalised racism, there is a double standard on human rights when migrant people are the rights holders. Let’s just compare the GDPR with the Regulation on the Interoperability of migration databases: the latter allows for the indiscriminate collection, processing and sharing of personal data, in contravention of the right to privacy and of data protection safeguards. As the AI Act slowly enters into force, we will keep on advocating for it to apply equally to every person, we will document harms stemming from the use of AI systems, and we will work with civil society organisations and grassroots movements to challenge the discriminatory nature of this regulation. 

EAISF: What goal are you working towards and how can people support #ProtectNotSurveil?

Sarah: We’re building the post-AI Act strategy of #ProtectNotSurveil to chart our next steps. In the coming months, we’ll be working on building our advocacy and movement strategies to (a) influence laws and policies to counter harm and put forward active demands in the field of tech and migration (including working on next steps from the AI Act and Migration Pact) and (b) expand #ProtectNotSurveil to ensure we are accountable and useful to grassroots and migrant-led movements. We have a long way to go here, and our goal is to be better connected with grassroots, migrant-led movements doing direct work with communities on the ground, like the Greek Forum of Migrants and the International Women* Space. 

EAISF: What books or podcasts on technology and society themes have you enjoyed recently?  

Alyna: I’ve been thinking lately about the “industrial” part of the border industrial complex. Our governments – EU and national – bear the burden of responsibility for the trajectory of policymaking on migration over the last decades. But this agenda has also been shaped by private actors with a deep and vested economic interest in making the case for Fortress Europe. I appreciated reading a recent book, Postcoloniality and Forced Migration: Mobility, Control, Agency, which looks at responses to migration across various geographies through the lens of coloniality and includes chapters on how this plays out in the role of state and corporate investments in technology. I also recommend an episode of the podcast Throughline, “The Ghost in Your Phone”, which looks at the extractivism at the heart of our insatiable appetite for technology. It changed my twelve-year-old’s mind about whether he really needed a smartphone after all. 

Caterina: As we face the rise of fascism in Europe and the perpetration of genocide against the Palestinian people, there is one book that has helped me a lot in navigating these times: Rest Is Resistance, a manifesto on the liberating power of rest and daydreaming by Tricia Hersey. I recommend it to anyone engaged in policy-related work and EU advocacy. We need rest to resist the sense of urgency imposed on us by fast-paced policy work and back-to-back deadlines. We need rest to take the time to imagine the world we really want to live in, and to reinvent the way we work, putting genuine relationship-building at its core and collective liberation as its objective. 

Chloé: I am generally keen to learn about the historical and political roots of technological systems used for policing and surveillance, which are systematically driven by colonialist logics and structural racism. Lately, I enjoyed the podcast episodes “The Crime Machine” from Reply All, which tell the story of how ‘CompStat’, a data-driven system introduced in 1990s New York to predict crime, rationalise law enforcement and drive down crime rates, pushed the NYPD to intensify its targeting of Black and Hispanic communities. It is a concrete example of how technologies, built from discriminatory law enforcement policies, in turn shape policing practices. 

Sarah: I’m thinking a lot these days about the inherent connections between racialised surveillance in Europe and the rest of the world, and how different collectives are resisting. Just as we need to contest harmful surveillance against migrants at Europe’s borders, we need to reckon with how Big Tech equips and facilitates violence across the world. My recommended reading is ‘No Azure for Apartheid’, a report by Microsoft employees documenting the company’s complicity in genocide in Palestine. For people interested in shifting to anti-colonial approaches to digital rights, I would also recommend following Weaving Liberation, an emerging collective with an inspiring programme for supporting and resourcing digital justice organising in Europe. 


EAISF: Thank you for taking the time to talk about #ProtectNotSurveil!


* #ProtectNotSurveil are Access Now, Equinox Initiative for Racial Justice, European Digital Rights (EDRi), Platform for International Cooperation on Undocumented Migrants (PICUM), Refugee Law Lab, AlgorithmWatch, Amnesty International, Border Violence Monitoring Network (BVMN), EuroMed Rights, European Center for Not-for-Profit Law (ECNL), European Network Against Racism (ENAR), Homo Digitalis, Privacy International, Statewatch, Dr Derya Ozkul, Dr. Jan Tobias Muehlberg, and Dr Niovi Vavoula. 
