Interview with Jurriaan Parie, Samaa Mohammad and Ariën Voogt from Algorithm Audit: “Defining responsible AI is up to all of us”.

Algorithm Audit is a European knowledge platform for AI bias testing and normative AI standards, and one of our grantee partner organisations. We spoke with Jurriaan Parie, Samaa Mohammad and Ariën Voogt about why algorithm auditing is important, how the organisation approaches it, and about the recent growth of Algorithm Audit. Follow Algorithm Audit on LinkedIn to stay up to date on activities they offer to their community, including knowledge-building webinars.

Jurriaan Parie is director and board member at Algorithm Audit. Previously, he worked as an AI validator in industry.

Samaa Mohammad is a board member at Algorithm Audit. Alongside this role, she works as a digital ethics specialist in industry.

Ariën Voogt is a board member at Algorithm Audit. Alongside this role, he is pursuing a PhD in philosophy and theology at the Protestant Theological University Amsterdam.

 

The photo features Jurriaan Parie (right), Samaa Mohammad (middle) and
Ariën Voogt (left), board members of Algorithm Audit.

Why do you audit algorithms and what kind of algorithms have you recently audited?

We review algorithms to bridge the gap between the qualitative requirements of law and ethics and the quantitative nature of AI. It is, for instance, unclear under what circumstances proxy variables for protected characteristics, such as ethnicity, may justifiably be used. There are also numerous open questions about the selection of quantitative metrics for measuring algorithmic fairness. By bringing together experts and diverse stakeholders in independent normative advice commissions, we provide answers for specific cases.
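To illustrate what such quantitative fairness metrics can look like in practice, here is a minimal, hypothetical sketch (not Algorithm Audit's own tooling) that computes two commonly used group metrics for a binary classifier: the difference in selection rates between two groups (demographic parity difference) and the gap in false positive rates. Which metric is appropriate, and what gap is acceptable, remains a normative question.

```python
# Minimal illustration of two common group-fairness metrics for a binary
# classifier. All names and data are hypothetical; real audits involve far
# more context (legal grounds, stakeholder input, choice of metric).

def selection_rate(preds):
    """Share of positive predictions (e.g. 'flagged for manual review')."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Share of truly negative cases that are nevertheless flagged."""
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Toy predictions and ground truth, split by a (hypothetical) group attribute.
group_a = {"preds": [1, 0, 1, 1, 0, 0], "labels": [1, 0, 0, 1, 0, 0]}
group_b = {"preds": [0, 0, 1, 0, 0, 0], "labels": [0, 0, 1, 1, 0, 0]}

# Demographic parity difference: gap in selection rates between groups.
dp_diff = selection_rate(group_a["preds"]) - selection_rate(group_b["preds"])

# False positive rate gap: gap in how often truly negative cases are flagged.
fpr_gap = (false_positive_rate(group_a["preds"], group_a["labels"])
           - false_positive_rate(group_b["preds"], group_b["labels"]))

print(f"Demographic parity difference: {dp_diff:.2f}")
print(f"False positive rate gap:       {fpr_gap:.2f}")
```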

Recently, we have published such case-based normative advice for various machine learning-based risk profiling methods. This advice includes explainability requirements and identifies ineligible input variables to mitigate indirect discrimination, such as avoiding differentiation based on SIM card type and ZIP code, as these variables can serve as proxies for ethnicity. Furthermore, we have published advice in which an advice commission concludes that no higher-dimensional forms of indirect bias occur in a BERT-based disinformation classifier for Twitter data. Currently, we are reviewing the fairness methodology of a YOLO-based computer vision blurring algorithm. We are committed to our non-profit perspective because we believe that normative standards for AI bias testing should be developed collaboratively with societal stakeholders, not dictated by private AI auditors.
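As a rough illustration of how a variable such as ZIP code can act as a proxy, the hypothetical sketch below screens a candidate input variable by measuring how strongly its values are skewed across a protected characteristic. A large skew signals that the variable may re-introduce the protected characteristic indirectly; whether its use is nevertheless justified is exactly the kind of question the advice commissions address. All names and data are invented for illustration, and this is only a screening heuristic, not the commissions' methodology.

```python
# Hypothetical screening heuristic: flag candidate input variables whose
# values are strongly skewed across a protected characteristic. A large skew
# suggests the variable could act as a proxy for that characteristic.

from collections import defaultdict

def group_shares(feature_values, protected):
    """For each value of the candidate feature, the share of records
    belonging to the protected group."""
    counts = defaultdict(lambda: [0, 0])  # value -> [protected count, total]
    for v, p in zip(feature_values, protected):
        counts[v][0] += int(p)
        counts[v][1] += 1
    return {v: c[0] / c[1] for v, c in counts.items()}

def proxy_skew(feature_values, protected):
    """Max difference in protected-group share across feature values."""
    shares = group_shares(feature_values, protected)
    return max(shares.values()) - min(shares.values())

# Toy data: 1 = member of protected group, 0 = not (entirely fictional).
protected = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
zip_area  = ["A", "A", "B", "B", "A", "B", "B", "A", "B", "B"]
device_os = ["ios", "android", "ios", "android", "ios",
             "android", "ios", "android", "ios", "android"]

print("ZIP area skew: ", proxy_skew(zip_area, protected))   # high -> proxy risk
print("Device OS skew:", proxy_skew(device_os, protected))  # low  -> weak signal
```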

In your audits, do you ever conclude that algorithms are “ok” to use as they are?

Certifying algorithms is impossible due to the absence of established criteria for considering an algorithm “ok”. Certification would have to take many dimensions into account, such as non-discrimination regulations, data processing laws and technical aspects. Ongoing ISO and CEN-CENELEC standardization activities, integral to the upcoming European AI Act, are a promising route towards harmonized technical standards for AI. Nonetheless, technical standards alone cannot fully address the normative dimension of AI. No silver bullet exists to define the difference between differentiation and discrimination. Algorithm Audit therefore actively participates in national and European standardization body working groups to advocate for the protection of fundamental rights within technical AI standards. In our view, this implies incorporating mandatory stakeholder panels whenever AI bias is assessed. We believe decision-making about responsible AI needs to be transparent and inclusive, and should be resolved in democratic sight. Defining what constitutes responsible AI is not exclusively up to technical experts and private AI auditors. It’s up to all of us.

Can you tell us what is “algoprudence” and why does it matter?

Algoprudence is case-based normative advice for ethical algorithms. It results from independent normative advice commissions that provide concrete answers to pressing issues arising in real-world algorithms. In these commissions, diverse stakeholders are represented, such as people subjected to the algorithm, algorithm developers and representatives from civil society. Algoprudence matters because the case-based advice serves as a starting point for further public debate on normative standards for AI. Others can:

  1. Learn from our algoprudence: to enhance the collective learning process on responsible AI;
  2. Challenge our algoprudence: to spark the discussion on AI in democratic sight;
  3. Reuse our algoprudence: to harmonize the resolution of ethical AI issues.

In this way we contribute to a democratic and participatory debate on the type of AI we want as a society.

How can people engage in understanding and shaping the design of algorithms used around them?

People can learn more about the algorithms used around them by reading our algoprudence on the normative decisions that underpin data modelling practices. Through our online knowledge-sharing webinars, people can delve into current responsible AI topics, such as the opportunities and limitations of Fundamental Rights Impact Assessments (FRIAs) and our open source technical AI auditing tools to detect and mitigate bias.

People can help shape the design of algorithms by using our algoprudence as a normative standard for AI. Additionally, people can contribute by submitting cases for review by our independent normative advice commissions and by participating in our CEN-CENELEC standardization activities. We are growing an international AI auditing community on Slack for co-writing technical standards and further developing our open source AI auditing tools. Feel free to reach out if you are interested in contributing.

Algorithm Audit has recently grown and professionalised. What has helped in this journey and what challenges have you faced?

Our expansion is driven by several factors. Over time, we have established a national and international network of policy-oriented and tech-savvy AI experts. Combining our quantitative and qualitative understanding of AI enables us to provide valuable insights to supervisory institutions, industry actors and politicians. We are actively reaching out to AI experts from various professional backgrounds to identify relevant ethical issues for our case reviews. Moreover, being part of the European AI & Society Fund’s ecosystem enhances our credibility in the international AI auditing community, facilitating the promotion of our work and the sharing of our bottom-up AI auditing experiences from the Netherlands.

As we now ascend to higher tiers of the (inter)national AI policy sphere, it can sometimes be a challenge to keep up on all fronts. Delivering high-quality work on our core activities is our highest priority, so other work, such as PR and communication, sometimes lags behind.

What books or podcasts that you have enjoyed would you recommend to our readers?

Podcast:

  • The London Review of Books’ Past Present Future series explores the history of ideas from politics to philosophy, culture to technology, including an episode on AI.
