The EU AI Regulation – comments and analysis from organisations we support

It’s been a busy few weeks for tech policy in Europe. In April, the EU Commission unveiled the world’s first-ever legal framework on Artificial Intelligence.

Even though the proposal contains some major improvements over previous drafts – notably a ban on some uses of AI that lead to unacceptable risks – the response from civil society has been lukewarm. As the legal experts Michael Veale and Frederik Borgesius summarised it in their in-depth analysis: “big loopholes, private rulemaking, powerful deregulatory effects”.

Two key concerns are that the proposal lacks protection from discrimination and mass surveillance. EDRi, together with 60+ human rights organisations and 116 MEPs, asked the European Commission to follow through on its promise of creating a truly people-centred AI regulation:

“The majority of requirements in the proposal naively rely on AI developers to implement technical solutions to complex social issues, which are likely self assessed by the companies themselves. In this way, the proposal enables a profitable market of unjust AI to be used for surveillance and discrimination, and pins the blame on the technology developers, instead of the institutions or companies putting the systems to use.”

As ANEC, the European consumer voice in standardisation explains:

“The reference to the conformity assessment regimes contained in the existing product legislation/Union harmonisation legislation has the consequence that the majority of AI consumer products such as toys or connected appliances would undergo only the manufacturer self-assessment, even if posing a high-risk to consumers. This is because existing product legislation uses conformity assessment modules that were developed for the type of risks addressed by such sectoral legislation (e.g. chemical, mechanical, etc.) and therefore do not include risks posed by AI.”

Another area of contention is the scope of prohibited AI, specifically on face recognition and biometric surveillance more broadly. In a Joint Opinion on the AI Act, the EU’s privacy watchdogs – the European Data Protection Supervisor, which is responsible for ensuring the EU institutions themselves stick to the EU’s data protection rules, and the European Data Protection Board, the bloc’s network of national privacy regulators – called for a general ban on any use of artificial intelligence technologies to recognize human features in public places. This includes faces, gait, fingerprints, DNA, voice, keystrokes and other biometric data.

Next steps

The European Parliament and the Member States will need to adopt the Commission’s proposals on a European approach for Artificial Intelligence and on Machinery Products in the ordinary legislative procedure. This could take a while. Once adopted, the Regulations will be directly applicable across the EU.

Responses to the proposal

AccessNow, EU takes minimal steps to regulate harmful AI systems, must go further to protect fundamental rights
AlgorithmWatch, The European Commission’s proposed regulation on Artificial Intelligence a major step with major gaps
ANEC, ANEC comments on the European Commission proposal for an Artificial Intelligence Act
EDRi, From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Fundacja ePaństwo, AI Regulation Proposal. What’s on the Plate?

We will continuously update this list.


EDRi is maintaining a document pool on Artificial Intelligence and Fundamental Rights with analyses from EDRi and member organisations, as well as legislative documents, European Parliament studies, and other useful resources.


ReclaimYourFace is a European movement that brings people’s voices into the discussion around biometric data used to monitor populations.
