The EU AI Regulation – comments and analysis from organisations we support

It’s been a busy few weeks for tech policy in Europe. In April, the EU Commission unveiled the world’s first-ever legal framework on Artificial Intelligence.

Even though the proposal contains some major improvements over previous drafts – notably a ban on some uses of AI that pose unacceptable risks – the response from civil society has been lukewarm. As legal experts Michael Veale and Frederik Borgesius summarise in their in-depth analysis: “big loopholes, private rulemaking, powerful deregulatory effects”.

Two key concerns are that the proposal lacks protection from discrimination and mass surveillance. EDRi, together with 60+ human rights organisations and 116 MEPs, has asked the European Commission to follow through on its promise of creating a truly people-centred AI regulation:

“The majority of requirements in the proposal naively rely on AI developers to implement technical solutions to complex social issues, which are likely self assessed by the companies themselves. In this way, the proposal enables a profitable market of unjust AI to be used for surveillance and discrimination, and pins the blame on the technology developers, instead of the institutions or companies putting the systems to use.”

As ANEC, the European consumer voice in standardisation, explains:

“The reference to the conformity assessment regimes contained in the existing product legislation/Union harmonisation legislation has the consequence that the majority of AI consumer products such as toys or connected appliances would undergo only the manufacturer self-assessment, even if posing a high risk to consumers. This is because existing product legislation uses conformity assessment modules that were developed for the type of risks addressed by such sectoral legislation (e.g. chemical, mechanical, etc.) and therefore do not include risks posed by AI.”

Another area of contention is the scope of prohibited AI, specifically on face recognition and biometric surveillance more broadly. In a Joint Opinion on the AI Act, the EU’s privacy watchdogs – the European Data Protection Supervisor, which is responsible for ensuring the EU institutions themselves stick to the EU’s data protection rules, and the European Data Protection Board, the bloc’s network of national privacy regulators – called for a general ban on any use of artificial intelligence technologies to recognize human features in public places. This includes faces, gait, fingerprints, DNA, voice, keystrokes and other biometric data.

Next steps

The European Parliament and the Member States will need to adopt the Commission’s proposals on a European approach for Artificial Intelligence and on Machinery Products in the ordinary legislative procedure. This could take a while. Once adopted, the Regulations will be directly applicable across the EU.

Responses to the proposal

AccessNow, EU takes minimal steps to regulate harmful AI systems, must go further to protect fundamental rights
AlgorithmWatch, The European Commission’s proposed regulation on Artificial Intelligence – a major step with major gaps
ANEC, ANEC comments on the European Commission proposal for an Artificial Intelligence Act
EDRi, From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Fundacja ePaństwo, AI Regulation Proposal. What’s on the Plate?

We will continuously update this list.

Resources

EDRi is maintaining a document pool on Artificial Intelligence and Fundamental Rights with analyses from EDRi and member organisations, as well as legislative documents, European Parliament studies, and other useful resources.

Campaigns

ReclaimYourFace is a European movement that brings people’s voices into the discussion around biometric data used to monitor populations.

 
