How AI-driven welfare systems are deepening inequality and poverty across Europe

At a recent briefing for funders hosted by the European AI & Society Fund and Ariadne Network, our grantee partners Likhita Banerji, Deputy Director of Amnesty Tech at Amnesty International, and David Cabo, Director of Fundación Ciudadana Civio, shared how automated decision-making systems and algorithms used by European governments are entrenching poverty and inequality rather than helping make societies fairer.

The welfare systems set up by European governments after World War II to rebalance their societies often provide essential help to people facing adversity because of systemic injustice or circumstance.

But amid a political push towards austerity, and particular hostility to support for migrants and other marginalised groups, many European governments are adopting AI systems to decide who is eligible for benefits, to detect who is most likely to commit welfare fraud, and to ‘streamline’ benefit claim processes.

“No government wants to get left behind in the great AI race. So it’s being deployed without proper testing, reinforcing systemic issues and bias. And the strong xenophobic narrative – anti-immigrant, anti-foreigner – is helping governments justify the need for fraud detection using AI.” Likhita Banerji, Amnesty Tech.

Amnesty Tech and Civio have been investigating AI-powered welfare systems over the last few years, finding that these algorithms have been causing widespread damage and, ironically, exacerbating inequality and poverty – contrary to the original goals of Europe’s welfare systems.

In 2023, Amnesty Tech (part of Amnesty International) investigated a World Bank-funded system in Serbia which creates profiles of people applying for social assistance, based on data such as income, age, household composition, health status and employment status.

However, many of the databases contained false or incomplete information, which meant the AI system was making inaccurate decisions about who was eligible for social assistance. Many citizens who were already receiving support were suddenly stripped of it, particularly Roma and people with disabilities. Similar exclusionary impacts have been documented in World Bank-sponsored projects in Jordan, Lebanon, Haiti, Nigeria, Morocco and Angola, as well as in neighbouring Montenegro and Bosnia and Herzegovina.

Discriminating against non-Dutch nationals and denying childcare benefits in the Netherlands

Meanwhile in the Netherlands, Amnesty Tech uncovered a childcare benefits scandal in 2021 which shook the country. Since 2013, the Dutch tax authorities had been using an algorithmic decision-making system to profile individuals applying for childcare benefits, in order to detect inaccurate and potentially fraudulent applications at an early stage. The system classified any non-Dutch national as ‘higher risk’, reinforcing discriminatory associations between race, ethnicity and crime.

Tens of thousands of parents and caregivers were falsely accused of fraud and had their childcare benefits suspended. They were subjected to hostile investigations characterised by harsh rules and policies, rigid interpretations of the law, and ruthless benefits-recovery practices. This led to devastating financial problems for the families affected, ranging from debt and unemployment to forced evictions when people were unable to pay their rent or keep up mortgage payments. Others were left with mental health problems and strained personal relationships, leading to divorces and broken homes.

Algorithms wrongly flagging law-abiding citizens as benefit fraudsters in Denmark, Sweden and France

Systems elsewhere have displayed similar problems. Over three years, our grantee partner Lighthouse Reports uncovered how Sweden’s Social Insurance Agency – the body responsible for running Sweden’s social security system – had been wrongly flagging benefit claimants as fraudsters. Their analysis revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, individuals with foreign backgrounds (such as those born overseas or with parents born in other countries), low-income earners and people without a university education. In France, the government is using a similar algorithmic system that discriminates against the most vulnerable and flags them as potential criminals.

And in Denmark – a country lauded for its generous social safety net – data-driven fraud control algorithms have been discriminating against low-income groups, racialised groups, migrants, refugees, ethnic minorities, people with disabilities and older people, according to a two-and-a-half-year investigation by Amnesty Tech. Amnesty argues that by flagging ‘unusual’ living situations such as multi-occupancy, intergenerational households and ‘foreign affiliations’ as indicators of a higher risk of benefit fraud, the government has engaged in social scoring – a practice prohibited under the European Union’s recently passed AI Act.

Likhita says Amnesty Tech tried to engage the Danish government to improve the system. “But rather than be open to our help, the government’s agency blocked us again and again. This also happened to Lighthouse Reports when they investigated the welfare system in Sweden. They were stonewalled by the agency responsible who used confidentiality as an excuse for secrecy – even though some of the data was available in their annual reports.” 

Demanding transparency on how AI is deciding who is eligible for energy subsidies in Spain 

In Spain, our grantee partner Civio has been investigating BOSCO, an automated decision-making system used to determine eligibility for energy subsidies. They found that the system is incapable of understanding the intersectionality of need, which is leading to hundreds of thousands of eligible and vulnerable people – such as older people and those on lower incomes – being denied help with their energy bills. Civio has slowly been uncovering how the system’s decisions are made and fighting for more transparency about the use of AI in welfare decisions across Spain. This month, Civio presented its case to the Supreme Court to gain access to the source code behind the system.

David Cabo of Civio points to some instances of AI supporting the delivery of welfare services. “In Barcelona, the local authorities used AI to cluster citizen concerns, which probably saved time. And using large language models to transcribe conversations about welfare benefits can be useful for future reference (although these transcripts would need to be checked by a human for accuracy).”

But he warns:

“Overall, welfare decisions should be made by humans. They are better at understanding someone’s complex life situation than at assessing their right to benefits from a few outdated data points.” David Cabo, Civio.

Both Civio and Amnesty Tech are working to increase public awareness of these issues. “Although people are concerned about mass surveillance, the public are not making the connection with AI used in public services,” says Likhita Banerji. Amnesty has released documentaries focusing on the human cost of these automated decisions, as well as publishing easy-to-read reports. Civio creates data visualisations and tools that let people experiment with algorithms – such as a prison-release decision-making system – and see for themselves how discriminatory they are.

The European AI & Society Fund organises regular briefings for funders to explore how AI is shaping society across a range of issues. If you would like to receive an invitation to these briefings, please email info@europeanaifund.org with your name and affiliation.  
