How AI-driven welfare systems are deepening inequality and poverty across Europe
At a recent briefing for funders hosted by the European AI & Society Fund and the Ariadne Network, our grantee partners Likhita Banerji, Deputy Director of Amnesty Tech at Amnesty International, and David Cabo, Director of Fundación Ciudadana Civio, shared how automated decision-making systems and algorithms used by European governments are entrenching poverty and inequality rather than helping make societies fairer.
The welfare systems set up by European governments after World War II to rebalance their societies often provide essential help to people facing adversity because of systemic injustice or circumstance.
But amid a political push towards austerity and particular hostility towards support for migrants and other marginalised groups, many European governments are adopting AI systems to decide who is eligible to receive benefits, detect who is most likely to commit welfare fraud, and ‘streamline’ benefit claim processes.
“No government wants to get left behind in the great AI race. So it’s being deployed without proper testing, reinforcing systemic issues and bias. And the strong xenophobic narrative – anti-immigrant, anti-foreigner – is helping governments justify the need for fraud detection using AI.” – Likhita Banerji, Amnesty Tech.
Amnesty Tech and Civio have been investigating AI-powered welfare systems over the last few years and have found that these algorithms are causing widespread damage and, ironically, exacerbating inequality and poverty – contrary to the original goals of Europe’s welfare systems.
In 2023, Amnesty International investigated a World Bank-funded system in Serbia that introduced automation, in the form of a data-driven system, into the process of determining eligibility for social assistance based on data such as income, age, household composition, health status and employment status.
However, many of the underlying databases contained false or incomplete information, which meant the automated system was making inaccurate decisions about who was eligible for social assistance. Many people who were already receiving support were suddenly stripped of it, particularly Roma and people with disabilities. Similar exclusionary impacts have been documented in World Bank-sponsored projects elsewhere too.
Discriminating against racial and ethnic minorities in the Netherlands and denying childcare benefits
Meanwhile in the Netherlands, Amnesty International published a headline-grabbing report about a childcare benefits scandal which shook the country. Since 2013, the Dutch tax authorities had been using an algorithmic decision-making system to profile individuals applying for childcare benefits and detect inaccurate and potentially fraudulent applications at an early stage. The system classified any non-Dutch national as ‘higher risk’, reinforcing a biased association between race, ethnicity and benefits fraud.
Tens of thousands of parents and caregivers were falsely accused of fraud and had their child benefits suspended. They were subjected to hostile investigations, characterised by harsh rules and policies, rigid interpretations of the law and ruthless benefits recovery practices. This caused devastating financial problems for the families affected, ranging from debt and unemployment to forced evictions when people could no longer pay their rent or mortgages. Others were left with mental health problems and strained personal relationships, leading to divorces and broken homes.
Algorithms wrongly flagging lawful citizens as benefit fraudsters in Denmark, Sweden and France
Systems elsewhere have displayed similar problems. Over three years, our grantee partner Lighthouse Reports uncovered how Sweden’s Social Insurance Agency, which runs the country’s social security system, had been wrongly flagging benefit claimants as fraudsters. Their analysis revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, people with foreign backgrounds (such as those born overseas or with parents born in other countries), low-income earners and people without a university education. In France, the government is using a similar algorithmic system that discriminates against the most vulnerable, flagging them as potential criminals.
And in Denmark – a country lauded for its generous social safety net – data-driven fraud control algorithms risk discriminating against low-income groups, racialised groups, migrants, refugees, ethnic minorities, people with disabilities and older people, according to a two-and-a-half-year investigation by Amnesty Tech. Amnesty argues that by flagging ‘unusual’ living situations such as multi-occupancy households, intergenerational households and ‘foreign affiliations’ as indicators of a higher risk of benefit fraud, the government has engaged in social scoring, a practice prohibited under the European Union’s recently passed AI Act.
Likhita says Amnesty Tech tried to engage the Danish authorities on its demands to improve the system. “But rather than being open to our recommendations, the government agency blocked us again and again.”
Demanding transparency on how AI is deciding who is eligible for energy subsidies in Spain
In Spain, our grantee partner Civio has been investigating BOSCO, an automated decision-making system used to determine eligibility for energy subsidies. They found that the system is incapable of accounting for intersecting needs and contained at least two design flaws that caused it to reject legally eligible applicants: for example, it would deny aid to some widow’s pension recipients even though they met the income requirements. Civio has slowly been uncovering how decisions are made and fighting for more transparency about the use of AI in welfare decisions across Spain. This month, Civio was invited to the Supreme Court to present its case for access to the source code behind the system.
David Cabo of Civio points to some instances of AI supporting the delivery of welfare services. “In Barcelona, the local authorities used AI to cluster citizen concerns, which probably saved time. And using Large Language Models to transcribe citizen services conversations can be useful for future reference (although the transcripts would need to be checked by a human for accuracy).”
But he warns:
“Overall, welfare decisions should be made by humans. They are better at understanding someone’s complex life situation than assessing their right to benefits on a few outdated data points.” – David Cabo, Civio.
Both Civio and Amnesty Tech are working to increase public awareness of these issues. “Although people are concerned about mass surveillance, the public are not making the connection with AI used in public services,” says Likhita Banerji. Amnesty has released documentaries focusing on the human cost of these automated decisions, as well as publishing easy-to-read reports. Civio creates data visualisations and tools that let people experiment with algorithms, such as a prison release decision-making system, and see for themselves how discriminatory they are.
The European AI & Society Fund organises regular briefings for funders to explore how AI is shaping society across a range of issues. If you would like to receive an invitation to these briefings, please email info@europeanaifund.org with your name and affiliation.