Supporting Technical Expertise to Protect Digital Rights

Reflections from our conversation at RightsCon 2023, moderated by Alexandra Toth (European AI & Society Fund), with participation from Florian Christ (Mercator Foundation), Chiara Giovannini (ANEC), Gemma Galdon Clavell (Eticas) and Jurriaan Parie (Algorithm Audit).


Civil society organisations need increasingly deep technical knowledge to navigate the AI policy world and be effective advocates. What technical expertise already exists, why is it needed, and what gaps can funders help to address? The conversation hosted by Stiftung Mercator and the European AI & Society Fund highlighted the following themes.

1. Power asymmetries and the urgent need for civil society representation in standard setting

Regulating artificial intelligence is a complex political and normative process, and the requirements set forth by laws such as the European Artificial Intelligence Act are only one side of the story. The other side is how the legislation delegates the task of fleshing out the technical details of those requirements to standardisation. After the EU AI Act is adopted, standard-setting bodies are expected to develop detailed technical specifications for the legal requirements, such as quality and risk management by AI providers, the auditing of AI systems, and the design of high-risk AI systems (for example, surveillance systems deployed at borders and AI used to aid HR decisions and manage workplaces). Essentially, standardisation is what makes it possible to assess whether companies are complying with the regulation and to approve AI systems for the market. But, as Florian Christ from Stiftung Mercator noted, in this process standardisation grapples with complex societal questions such as ‘How to measure fairness?’, which are not easy to answer, let alone to set technical parameters around.
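To see why such questions resist a single technical parameter, consider a minimal sketch (our example, not one discussed on the panel): two widely used statistical definitions of fairness, computed on the same hypothetical decisions, can disagree with each other, so a standard would have to choose between them.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Difference in positive-decision rates between the two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def equal_opportunity_gap(decisions, group, qualified):
    """Difference in positive-decision rates among qualified applicants only."""
    rate_0 = decisions[(group == 0) & (qualified == 1)].mean()
    rate_1 = decisions[(group == 1) & (qualified == 1)].mean()
    return abs(rate_0 - rate_1)

# Hypothetical decisions for eight applicants from two groups.
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
qualified = np.array([1, 1, 0, 0, 1, 0, 0, 0])
decisions = np.array([1, 1, 0, 0, 1, 0, 0, 0])

print(demographic_parity_gap(decisions, group))            # 0.25: rates differ overall
print(equal_opportunity_gap(decisions, group, qualified))  # 0.0: rates equal among the qualified
```

On this toy data one metric flags unfairness and the other does not; deciding which definition should count is a normative choice, which is exactly what makes it hard to fix in a technical standard.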

Chiara Giovannini from ANEC, the European consumer voice in standardisation, explained that participation of all stakeholders in the work of standard-setting bodies is encouraged, but in practice industry is better positioned to engage, since it has the economic interest and the resources to participate.

ANEC is a partner in standardisation processes and is mandated by the European Commission to participate in European standardisation bodies, defending consumer interests, a perspective that often conflicts with the industry point of view. ANEC, however, does not have voting rights, so it can only express an opinion. Beyond ANEC, civil society is heavily underrepresented overall. Chiara suggested that resourcing civil society organisations to participate at the national, European and international levels is the kind of support that is needed most. The international level matters because convergence between standards is strongly encouraged, and big ICT companies have a lot of influence, especially at this level.

Corporate capture is not unusual in standard-setting structures, which are often dominated by industry from the US and China. The same companies engage in multiple standardisation bodies, sending several representatives through different national delegations and thus wielding outsized power. This further deepens the power asymmetries that shape standardisation. But precisely because standards will become part of the regulatory regime, with a legal foundation and real impact on which AI systems are made available for use, civil society needs to be involved in setting them.

More generally, Chiara urged a rethink of the standardisation process as it stands. Standards work well for specifying performance, but when it comes to defining bias and fundamental rights, ANEC questions whether standardisation is the right tool.


2. Civil society approaches to and knowledge of AI auditing

Auditing AI is another technical area that requires specific expertise. Auditing provides oversight of what AI providers are doing, compared with what they say they are doing and, ultimately, what they should be doing by law (in the future under the AI Act, but already under the GDPR, which has been in place for five years). Auditing can also be done bottom-up, to unpack the components of specific algorithms on a case-by-case basis.

Gemma Galdon Clavell from Eticas noted that the days when AI was thought not to need regulation are over. Eticas has developed a socio-technical methodology for interrogating AI systems from beginning (pre-processing) to end (post-processing), giving auditors a full picture of what is going on inside an AI system.
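What that end-to-end view can mean in practice is easiest to see in code. Below is a minimal sketch (our reading of ‘pre-processing to post-processing’, not Eticas’s actual methodology; the data and thresholds are hypothetical): one check on the training data going in, one check on the decisions coming out.

```python
import numpy as np

def preprocessing_check(train_groups, population_share, tolerance=0.1):
    """Flag groups whose share of the training data drifts from their population share."""
    shares = {g: (train_groups == g).mean() for g in set(train_groups.tolist())}
    return {g: abs(share - population_share[g]) > tolerance for g, share in shares.items()}

def postprocessing_check(decisions, groups, tolerance=0.1):
    """Flag whether positive-outcome rates differ across groups by more than the tolerance."""
    rates = [decisions[groups == g].mean() for g in set(groups.tolist())]
    return max(rates) - min(rates) > tolerance

train_groups = np.array([0] * 9 + [1] * 1)        # group 1 is 10% of the training data...
population_share = {0: 0.5, 1: 0.5}               # ...but 50% of the population
print(preprocessing_check(train_groups, population_share))  # {0: True, 1: True}

groups    = np.array([0, 0, 0, 1, 1, 1])
decisions = np.array([1, 1, 1, 0, 0, 1])
print(postprocessing_check(decisions, groups))    # True: 100% vs 33% positive rates
```

Real audits look at many more stages and properties, but the principle is the same: compare what goes in and what comes out against an explicit expectation.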

Third-party audits, so-called adversarial audits conducted independently by external parties, are also very important for unpacking specific issues. Civil society plays a crucial role working with communities that have been negatively impacted by AI. Eticas is planning to publish a guide to third-party auditing to encourage organisations to conduct these audits. Through a system’s outputs, auditors can see what it has learned and what it is using to make decisions. Gemma encourages social justice organisations to undertake auditing to see how technology is affecting their communities.
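One way such output-based probing can work (a minimal sketch of the general technique, not a published Eticas tool; the decide function and data are hypothetical) is to query the black box with pairs of inputs that differ in a single attribute and count how often the decision flips:

```python
def flip_rate(decide, cases, attribute, values=(0, 1)):
    """Share of cases whose decision changes when only `attribute` changes."""
    flips = 0
    for case in cases:
        outcomes = set()
        for value in values:
            probe = dict(case, **{attribute: value})
            outcomes.add(decide(probe))
        flips += len(outcomes) > 1
    return flips / len(cases)

# Hypothetical black box: approves applicants mainly on income,
# but quietly penalises one postcode.
def decide(applicant):
    return applicant["income"] > 30_000 and applicant["postcode"] != 1

cases = [{"income": 25_000, "postcode": 0}, {"income": 40_000, "postcode": 0},
         {"income": 50_000, "postcode": 0}, {"income": 32_000, "postcode": 0}]
print(flip_rate(decide, cases, "postcode"))  # 0.75: postcode drives most decisions
```

A high flip rate on an attribute the provider claims is irrelevant is exactly the kind of finding an adversarial audit can surface without any access to the system’s internals.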

Algorithm Audit, a non-governmental organisation based in the Netherlands, takes a different, case-based approach to auditing algorithms. Jurriaan Parie from Algorithm Audit explains that context-specific ‘algoprudence’ (jurisprudence for algorithms) is needed to define normative standards for AI. Through their work, they ask normative questions, such as: is it OK for a specific variable to be used in an algorithm? For example, is it OK to include the type of SIM card a customer uses when making an automated decision about whether to give that customer access to “pay-later” services? Here the type of SIM card could act as a proxy for identifying and excluding a specific group of users, since in some contexts particular SIM cards are used by distinct demographics. Algorithm Audit puts together panels of experts, academics, civil society representatives and citizens to deliberate on these issues. These are the questions that technical standards cannot really answer: what is good or bad for people, and when? Jurriaan argues that certain questions on fairness should be answered publicly, rather than defined by private actors.
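To make the proxy concern concrete, a simple first check (our illustration with hypothetical data, not Algorithm Audit’s actual analysis) is to measure how well the candidate variable predicts a protected attribute and how strongly the automated decision shifts with it:

```python
import numpy as np

def proxy_strength(feature, protected):
    """How often the candidate proxy agrees (or disagrees) with the protected attribute."""
    return max((feature == protected).mean(), (feature != protected).mean())

def outcome_gap(decisions, feature):
    """Difference in approval rates between the two feature values."""
    return abs(decisions[feature == 0].mean() - decisions[feature == 1].mean())

sim_type  = np.array([0, 0, 0, 1, 1, 1, 1, 0])   # e.g. contract vs prepaid (hypothetical)
protected = np.array([0, 0, 0, 1, 1, 1, 0, 0])   # demographic group (hypothetical)
approved  = np.array([1, 1, 1, 0, 0, 0, 1, 1])   # "pay-later" decisions

print(proxy_strength(sim_type, protected))  # 0.875: SIM type nearly encodes the group
print(outcome_gap(approved, sim_type))      # 0.75: and approval rates diverge with it
```

Numbers like these can flag a proxy, but they cannot settle whether using the variable is acceptable; that is the normative question the deliberative panels are convened to answer.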

Because AI has many applications and purposes, no single type of audit suffices; many approaches are needed. Context matters, and audits have to engage with the specific setting in which a system is deployed.


How can civil society participate in this field? Some ideas from the panelists:
  • Use available resources and develop new ones, such as auditing guides.
  • Work with media and affected groups to surface issues and raise awareness.
  • Engage in the work of national standardisation bodies, since they are the ones making decisions nationally and shaping standards at the European level.
How can funders support the technical capacities of civil society organisations? Some ideas from the panelists:
  • Recognise that technology is transversal, and technical expertise can have a huge impact in the fight to protect digital rights.
  • Provide financial support to civil society organisations taking part in standardisation bodies and processes at all levels.
  • The results of this work are not always very visible, and therefore not always attractive to funders, but that does not mean the work is unimportant. Without the engagement of civil society organisations, standards will be set by industry.
  • Public-private partnerships (public funding and foundation funding) are needed to bring more AI auditing into the normative domain.


This blog was prepared with input from Florian Christ from Stiftung Mercator and Jurriaan Parie from Algorithm Audit.
