European Civil Society on the AI Act deal

[we are updating this blog as more reactions from civil society come in; last updated on 9 January 2024]

European policymakers have reached a political agreement on landmark regulation for Artificial Intelligence. Civil society organisations have worked tirelessly throughout the development of the EU’s AI Act to secure fundamental rights and promote the public interest. The final stages of negotiation were fraught, culminating in a 36-hour negotiating marathon to find a compromise between different positions.

Although the deal was announced as a triumph, in practice many details are yet to be thrashed out by the Institutions’ advisors and bureaucrats before the text is sent back to legislators for formal approval. Civil society experts warn that the full effects of the draft regulation will not be clear until these details are decided.

We have compiled the initial reactions and analysis from European civil society organisations, mostly our grantee partners. Go to the bottom of this blog for a list of media outlets that featured commentary from our grantees.

Biometric identification and emotion recognition: no full bans

European Digital Rights (EDRi) stated in a press release that the deal includes a ban on live public facial recognition, but with several exceptions that open a path to using these systems in dangerous and discriminatory ways and for mass surveillance. The exceptions are understood to be carved out for law enforcement to fight crime (still to be defined), but they open far-reaching loopholes in terms of their impact on people.

Access Now commented, referring to Commissioner Thierry Breton’s statement, that a ban on remote biometric identification with three exceptions is anything but a full ban, calling it a “guidebook on how to use a technology that has no place in a democratic, rights-based society”. At the same time, Access Now warned that it is hard to assess how far the bans and exceptions reach, and that seeing the final legal text is therefore paramount.

Access Now’s Daniel Leufer explained to WIRED that both live and retrospective biometric identification “destroy anonymity in public spaces.” He noted that real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, while “post” or retrospective biometric identification can establish that the same person also visited the train station, a bank, and a supermarket yesterday, using previously stored images or video.

EDRi also criticised the fact that the deal only bans emotion recognition systems in workplaces and educational settings. According to the organisation, this illogically omits the most harmful uses of all: those in policing and in border and migration contexts.

“It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc. Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm. Our fight against biometric mass surveillance is set to continue.”

Ella Jakubowska, Senior Policy Advisor, European Digital Rights

Predictive Policing

Fair Trials noted that the partial ban on ‘predictive’ policing and crime-prediction systems is significantly weaker than the ban voted for by the European Parliament in June. The provisional agreement prohibits some systems that make predictions about individuals based on “personality traits or characteristics”. However, it does not ban geographic crime-prediction systems, which are widely used by police forces across Europe, despite evidence that these systems reinforce existing racism and discrimination in policing.

Fundamental Rights Safeguards and Redress

AlgorithmWatch commended the fact that mandatory fundamental rights impact assessments and public transparency duties for deployments of high-risk AI systems are included in the deal, thanks to the advocacy efforts of civil society organisations. However, AlgorithmWatch warned that the AI Act also contains major loopholes, such as the fact that AI developers themselves have a say in whether their systems count as high-risk. The Center for Democracy and Technology Europe (CDT Europe), in its AI bulletin, also raised serious concerns about this self-assessment and argued that a more robust approach would be to require that the fundamental rights impact assessment be submitted as proof that a system is not high-risk.

The European Center for Not-for-Profit Law (ECNL) explained that fundamental rights impact assessments will not be meaningful if the final text neither specifies the criteria for the assessments nor mandates the European Commission to develop guidelines in meaningful consultation with civil society. ECNL has made extensive proposals for such criteria.

Organisations celebrated the fact that people will have the right to an explanation when a decision based on a high-risk AI system affects their rights, and will be able to lodge complaints about such systems.

However, Mher Hakobyan, Advocacy Advisor on Artificial Intelligence at Amnesty International, stated that the EU “failed to ban the export of harmful AI technologies, including for social scoring, which would be illegal in the EU,” establishing double standards.

General Purpose AI

Kris Shrishak, Senior Fellow at the Irish Council for Civil Liberties (ICCL), commented to elDiario.es that General Purpose AI (GPAI), meaning AI systems without a specific intended purpose, will mainly face transparency obligations. Apparently, the provisional deal does not foresee risk assessments, cybersecurity requirements or third-party assessments for them. Only GPAI systems trained using computing power above 10^25 FLOPs will face stricter requirements, such as incident reporting and cybersecurity obligations. He noted that in practice this means that currently only OpenAI’s GPT-4 is above this threshold (Google does not provide FLOP information on Gemini). Additionally, the European Commission will be able to designate additional GPAI systems based on other criteria, such as the number of business users.
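
To give a sense of the scale of that threshold, here is a minimal back-of-the-envelope sketch. It relies on the widely cited rule of thumb that training compute is roughly 6 FLOPs per model parameter per training token; the model sizes and token counts below are purely hypothetical illustrations, not figures from the AI Act or from any provider.

```python
# Rough check of hypothetical training runs against the AI Act's
# 10^25 FLOP threshold for stricter GPAI obligations.
# Assumption: training compute ~ 6 * parameters * training tokens
# (a common estimation heuristic, not part of the AI Act itself).

THRESHOLD_FLOPS = 1e25  # compute threshold named in the provisional deal

def training_flops(parameters: float, tokens: float) -> float:
    """Estimate training compute in FLOPs (~6 per parameter per token)."""
    return 6 * parameters * tokens

# Hypothetical example runs: (parameter count, training tokens)
runs = {
    "7B params, 2T tokens": (7e9, 2e12),
    "70B params, 15T tokens": (70e9, 15e12),
    "500B params, 10T tokens": (5e11, 1e13),
}

for name, (params, tokens) in runs.items():
    flops = training_flops(params, tokens)
    side = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```

Under this approximation, only the largest of the three hypothetical runs (around 3 × 10^25 FLOPs) would cross the line, which illustrates why so few of today’s models are expected to fall into the stricter tier.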

Matthias Spielkamp, Executive Director of AlgorithmWatch, welcomed the transparency requirements, including disclosure of the energy consumed by GPAI, but added that the organisation had “strongly advocated for much further-reaching obligations, including on protecting the rights of click workers, strengthening the rights of individuals affected, and ensuring accountability across the value chain.”

AI in healthcare

Health Action International (HAI) Research Officer Janneke Van Oirschot told Inside AI Policy that the AI Act won’t address all patients’ needs and concerns in healthcare because it is horizontal rather than sector-specific legislation. She explained that AI applications often used in clinical care may not be considered high-risk systems: “A lot of the AI that we see now being used in this context is not classified as a medical device and therefore won’t be regulated as high-risk; therefore, there are no regulatory requirements and no conformity assessments for these systems. You could think of AI systems for surveillance of behaviour or lifestyle monitoring, and all kinds of smart technology such as incontinence material and smart beds, which collect extremely sensitive data, and whose malfunctioning can harm patients. We think these systems should not remain unregulated.”

HAI hopes that some of these shortcomings can be addressed through the AI standards currently being developed, and calls for sector-specific standards for AI in healthcare.

Special attention to deepfakes

Dr Julia Slupska, Head of Policy, Campaigns and Research at our partner organisation Glitch, reflected on the agreed measures and their contribution to addressing online abuse, telling us: “Glitch welcomes the inclusion of deepfakes in the transparency measures in the political agreement on the AI Act: labelling deepfakes and other AI-generated content is a crucial first step. However, labelling itself is not sufficient to address the harms of deepfakes, particularly deepfake abuse and pornography. As we see in the example of dedicated deepfake pornography websites like Mr. Deepfakes (see related campaign), simply disclosing that something is artificially generated does not sufficiently prevent or mitigate the harms of deepfakes to women and people minoritised on the basis of gender, particularly to those in the public eye and those who come from Black and minoritised communities who are particularly affected by hypersexualisation online.”

What now?

Many details are yet to be thrashed out, and nothing is settled until the final text sees the light of day, which can still take a few months. However, it is clear that much was achieved for people and society thanks to tireless organising and advocacy by hundreds of civil society organisations over several years.

Pressure on lawmakers, especially the European Parliament negotiators, remains high to avoid further erosion of human rights and safeguards, as tech companies are expected to keep lobbying to lighten the obligations and national governments to push for even greater exceptions for law enforcement.

“What we can already say now: The AI Act alone will not do the trick. It is just one puzzle piece among many that we will need in order to protect people and societies from the most fundamental implications that AI systems can have on our rights, our democracy, and on the societal distribution of power.”

Angela Müller, Head of Policy & Advocacy, AlgorithmWatch

Once the legal text is made public, Access Now urges stakeholders to closely scrutinise the following three issues:

  • Exemptions for national security,
  • Transparency obligations for law enforcement and migration authorities, and
  • The list of high-risk systems.

We will keep working with and supporting groups of digital rights and social justice organisations striving to align AI with fundamental rights and to draw the line where AI should be prohibited, so that it serves people and society, not just the powerful few.

We are also developing our strategy for the implementation phase of the AI Act and the role of civil society to make it work. Please contact us if you’d like to discuss it.

Our grantees’ commentary on the AI Act provisional agreement in the media

[the list is being updated]

Ada Lovelace Institute (in English):

Access Now (in English and Spanish):

EDRi (in English):

Irish Council for Civil Liberties (in English, German and Spanish):

Panoptykon Foundation (in Polish):
