Interview with Alyna Smith, PICUM


The Platform for International Cooperation on Undocumented Migrants (PICUM) is a network organisation that seeks a world where human mobility is recognised as a normal reality, and where all people are guaranteed their human rights regardless of migration or residence status. Bringing together a broad range of experience and expertise, PICUM generates and coordinates a humane and informed response to undocumented migrants’ realities and provides a platform for its members and partners to engage policy makers and the general public in the full realisation of their rights. Founded in 2001 as an initiative of frontline organisations to mobilise support for undocumented migrants, today PICUM leads a network of 167 civil society organisations in 35 countries. 

PICUM sets out to bridge the gap between the work of migrants’ rights advocates and digital rights advocates, focusing on the criminalisation of undocumented people and the growing use of technology to monitor, identify and surveil them in order to facilitate increased deportations, often justified on national security grounds. PICUM aims to influence policy debates and public discourse at the EU and national levels, and to develop its advocacy objectives at the intersections of migration policy, migrants’ rights and digital rights/AI.

To find out more about PICUM’s work and the challenges of AI in the migration context, we talked to Alyna Smith, PICUM’s Advocacy Officer.

Why are you working on digital and tech more broadly? 

Alyna: PICUM’s work focuses on the rights of undocumented migrants. Our entry point into digital rights came through our longstanding advocacy on the “firewall”, which challenges the ways immigration control finds its way into other sectors, leaving undocumented people who access essential services like health care exposed to possible deportation. Ahead of the adoption of the GDPR, we started looking more closely at privacy and data protection perspectives on this work, including how EU data protection rules could be used to strengthen social rights.

Although we still consider ourselves relative newcomers to this space, we see the importance of engaging as much as possible because information technology and digital tools are increasingly used for immigration control in ways that are discriminatory and non-transparent: they expand the ability of law enforcement and immigration authorities to surveil, identify, apprehend and deport people, whether at the border or already living in Europe, including through discriminatory profiling practices, while making it harder for those most affected to defend their rights. This will inevitably affect our network and undocumented people themselves, so we need to stay abreast of relevant developments as best we can; to inform, empower and engage our members and partners on these issues; and, wherever possible, to advocate with others for change.

Why are you working on AI specifically? 

Alyna: We’re concerned about the use of AI tools in immigration procedures – from assessing a person’s dialect in asylum procedures to estimating the “risk” that a visa applicant will not respect the conditions of their stay – in ways that entrench discriminatory assumptions based on a person’s nationality, race, ethnicity, gender or age, among other factors, and that give a false perception of objectivity, leading to disproportionate reliance on their results. We’re also concerned that, although the EU’s newly proposed AI regulation recognises certain uses of AI in the migration field as “high risk”, the regulation wouldn’t apply to the EU’s own migration and asylum information systems.

We are keen to work with other organisations to deepen awareness of specific uses of AI that affect undocumented people across a range of contexts, including uses in the employment context, with the aim of better supporting our members and contributing to collective advocacy around the EU AI Act.  

Why do we need an organisation like PICUM in these debates? 

Alyna: PICUM is the only European network focused exclusively on the rights of undocumented migrants. Our work spans their realities and rights in the context of migration policy and borders, as well as in the broader context of their lives and experiences within Europe, where millions of people with irregular residence status live and work. We therefore bring a perspective that encompasses the policing and surveillance dimensions of AI and digital rights within immigration systems, as well as their implications for access to justice, social rights and rights in the world of work.

In relation to policing, where there are particular concerns about the use of AI and invasive technologies, our work shows the parallels with immigration systems, as well as the compounded impact of ethnic and racial profiling on people with precarious residence status, for whom interactions with the police carry the added risk of deportation.

In your view what are the main risks and opportunities of AI and tech in the migration space more broadly? 

Alyna: At the EU level, we see undocumented people being routinely excluded from rights and protections that apply to everyone else – including digital rights. Barely a year after the GDPR came into force, the EU adopted regulations interconnecting existing (and planned) migration and asylum databases to support efforts to address irregular migration and “serious crimes”, reinforcing false and discriminatory assumptions and trampling on key principles of data protection. As noted above, while the proposed AI Act recognises certain migration-related uses of AI as “high risk”, it nonetheless exempts the EU’s own migration and asylum information systems. The conflation of immigration enforcement with law enforcement or security aims – as in the so-called “interoperability regulations” – makes it easier for policymakers to carve out these types of exceptions, and harder to counter them.

The scale, complexity and opacity of the technology used by authorities – or by private actors on their behalf – make it extremely difficult to anticipate, monitor, respond to and mitigate its negative impact on the people most affected. They also make it harder to inform people about how these systems are used and how they might be affected.

The opportunity we see is for unprecedented mobilisation across sectors – digital rights, migrants’ rights, anti-racist and anti-discrimination activists – to address these interconnected issues, which none of us is in a position to tackle alone.

Would you like to share anything you have learned so far? 

Alyna: The problems surrounding the use of AI and digital tools in the migration context cannot be separated from the problems of the migration context itself, which is defined by policies that embody and reinforce global inequalities and hierarchies based on race, ethnicity, gender and class. This means we cannot address the potential harms of applying technology in this context without also recognising, naming and tackling these underlying issues. This is therefore about the long game, and about sustained action to address inequality.
