Interview with Matthias Spielkamp, AlgorithmWatch



AlgorithmWatch is the leading civil society organisation in the field of the social impact of automated decision-making (ADM) and AI-based systems. The mission of AlgorithmWatch is to protect people's rights and strengthen the common good in the face of the increased use of algorithmic systems. This is done by analysing the effects of ADM / AI-based processes on human behaviour, pointing out ethical conflicts and explaining the characteristics and effects of these complex processes to a general public. To maximise the benefits of ADM / AI for society, AlgorithmWatch builds coalitions with different communities and disciplines, and co-develops ideas and strategies to achieve the responsible and benevolent use of ADM / AI-based systems – with a mix of technologies, regulation and suitable oversight institutions.

We talked to Matthias Spielkamp, Co-Founder and Executive Director of AlgorithmWatch, to find out more about their work.


You argue that algorithmic decision making should strengthen – rather than threaten – autonomy, fundamental rights, democracy, and the common good. It should benefit people and society, not harm them – and should benefit all of us, not just a few. How are you working to make that possible?

Matthias: Many tech companies are claiming that so-called Artificial Intelligence (AI) is some kind of magic technology that will help us overcome all of humanity’s ills, from hunger to injustice and climate change. This, of course, is hogwash. Technology does not exist in a vacuum; its creation, design and impact always depend on the context in which it is developed and used. At the moment, more and more actions that used to be confined to humans are being transferred to what we call automated decision-making (ADM) systems. They range from credit scoring – determining who will receive a loan or get a post-paid mobile phone plan – to content moderation and recommendation on social media platforms, to the management of social welfare payments by governments.

There’s an opportunity to use these systems wisely, to augment and improve decisions, and to offer better services. But at the same time there is a grave risk that people are harmed in the process: statistics replace judgment, while redress and contestation mechanisms disappear into a nightmare of Kafkaesque online systems that deflect responsibility and refuse accountability.

We need to devise mechanisms for better development, but also for oversight of these procedures. This ranges from impact assessment tools that help public servants develop better systems, to guidelines for developers and users. But it also means that there need to be clear laws that define what is acceptable to us in democratic societies, and the enforcement mechanisms to follow through in case companies or public authorities break those rules.


What is an issue that you focus on currently? Where do you think there is the greatest opportunity to make change?

Matthias: We are focusing on the European Union’s Digital Services Act and AI Act, two new laws supposed to protect people’s rights vis-à-vis large platforms like Facebook, YouTube and Amazon, and everyone who uses “AI”-based systems. Because both laws will apply directly in all Member States of the European Union, we fight for the best possible outcome: How can they protect human rights, but also collective interests – for example, an environment for debate where everyone can have a voice instead of being harassed, intimidated and silenced?


What are the biggest challenges you encounter in your work?

Matthias: One is the strategy to perplex. It’s used mainly by companies who argue that their technologies are so complex and hard to understand that no one except themselves can really understand what they are doing, and who therefore defy attempts at regulation. This is nonsense. If we find out that “AI”-based skin cancer detection produces much worse results for People of Color than for white people, it’s not our responsibility to understand why this is the case. It’s the developer’s responsibility to develop a system that does not discriminate.

But another challenge is the risk of over-simplifying. We tend to look at technology as separate from society. This is a trap. For example, when biometric recognition systems fail to recognize People of Color with the same accuracy as white people and we then develop more accurate recognition systems – instead of prohibiting the use of such systems for surveillance purposes – we may end up with a more effective surveillance regime. This is the flip-side of tech-solutionism: it’s tech-regulationism that’s blind to the interdependencies of a society.


Despite the issues that AlgorithmWatch works on, algorithmic decision-making also holds great promise – what is your vision for creating a positive impact?

Matthias: On the one hand, the discussion about automating decisions has already taught us a lot about our understanding of the world: We still don’t know what human intelligence is, but the quest to emulate it has given us a lot of insights into how the human mind works. On the other hand, we have developed tools that can in principle make the lives of many people better by automating tedious tasks, or by better detecting and treating illnesses. But if we fail to prevent all the value created from them ending up in the pockets of a few megalomaniac billionaires who then decide to spend it on flying to the moon in their private spaceships, we are pretty much doomed.

