Interview with David Cabo from Civio: “We are ready to go to the Supreme Court”

The Civio Foundation is an independent, non-profit newsroom based in Spain that monitors public authorities and helps citizens understand government decisions that affect them. Civio's work combines data journalism, advocacy, and strategic litigation. The European AI & Society Fund caught up with David Cabo, Civio's co-director, about their fight for open information about algorithms used by public institutions.

David is a software developer by training who spent the first ten years of his career in what he calls a boring corporate job. He became interested in activism through open data and is now one of Civio's two co-directors, responsible for the technical and data side of the work. His fellow co-director, Eva Belmonte, leads the journalism.

What led Civio to start working on algorithmic fairness in Spain?

As part of our regular reporting on government activities, we published some articles around 2018 about the transition to a new electricity subsidy programme for vulnerable households. It was a complicated process, with hard-to-understand forms, and we were shocked by the very low take-up rate, so we ended up building an online assistant so people could check their application and download pre-filled forms. While building the assistant we ran into questions of our own, since parts of the legislation were not well specified, and some users alerted us to inconsistencies in their application results, so we started asking for further details via access to information requests. We hit a wall on the government side, which didn't want to share details, but we kept up the fight, which led us to uncover mistakes in the software processing the applications.

Our online assistant turned out to be very popular, with hundreds of thousands of grateful users, which made us realise that there's a huge demand for clear and practical information about these topics. Since then, through guides, articles and online tools, we've been doing our best to expand the number of recipients of this and other subsidies, and the digitization of public services and automated decision-making have become growing concerns for us.

You’ve done nearly everything to make the source codes of algorithms you investigate public, from freedom of access to information requests, to court cases. Why is that so important? How has this process gone for you and what have you learned?

In our original case, the one about electricity subsidies, the software system approving or denying applications was following a simple algorithm, specified by law. We proved there were mistakes in the test scenarios used to validate the system, i.e. there were errors in the first half of the process, converting the legal text of the subsidy into a software specification. We want to check whether there are more mistakes in the second half of the process, where that technical specification is converted into an actual running system, and for that we want to see the code. In more complex systems, with machine learning, we may need access to the training data, or the training algorithm, or the resulting weights, but for our particular case, the source code is important, and there’s no reason not to open it.

But, on top of the source code for this particular system, when we launched our case we also wanted to test the legal tools we have to audit and evaluate these types of automated decision-making systems, because we know there will be more and more of them in the future. Asking for technical specifications turned out to be too innovative for the government's taste, but, with support from the transparency regulatory body, we eventually got them. Source code, however, was a step too far, and we had to go to court. We're now at the appeal court level, having lost the first trial, refining and strengthening our responses to the administration's arguments for not sharing the code: intellectual property (even though the system was developed internally) and public security (i.e. showing the code would somehow open the Ministry of Finance's databases to hackers). Through all this, we're learning, or rather confirming, that this is a very slow process, and that lower courts aren't prepared to handle technical discussions. But we need to open these black boxes, and we're ready to go up to the Supreme Court if we need to, as we've done in the past with other cases, because we believe that is where some of the core issues will be evaluated properly.

Spain is an interesting place to watch for people following AI regulation in Europe. A year ago, Spain launched the AI Sandbox, committing to testing technical and regulatory solutions for AI. What role has civil society in Spain played in this process?

There are certainly a lot of public declarations and promises, and a stated intention to position Spain as an innovator in the area, but much detail and concrete work is still missing. The AI Sandbox is not yet up and running, nor is it very clear what it will entail. The future AI supervisory agency has been created, nominally, but its structure, personnel and concrete mission are yet to be defined. A major contract has just been awarded to an external consulting agency to support these processes.

But the opportunities for civil society to participate are, so far, limited. A coalition of diverse CSOs, specialized in topics such as migrants', consumers' and workers' rights, has been formed under the name "IA Ciudadana" ("Citizens' AI"). Our first goal is to demand direct communication and participation channels with the government. We just had our second meeting with Carme Artigas, the Secretary of State in charge of AI, so we'll see whether the promises to involve civil society in the future AI agency are honoured.

What are some of the things that Civio recommends to public services that want to introduce automation in their decision-making? 

We don’t have a fully specified set of guidelines or rules or processes: there are certainly more specialized organizations out there, and we’re also waiting to see what the AI Act will specify in detail. But, from our limited experience, once the possibility of automating a process is announced, we would strongly encourage public bodies to do the development in the open (e.g. analysis of the issue, technical specs…), consulting with civil society and experts from the start, when the system is just a possibility and not a running reality. But not just legal or AI experts, or some generic ethical board: they need to involve representatives from the affected communities and/or CSOs working with them, e.g. front-line social services.

Also, on the broader issue of digitization, don’t ignore and abandon the offline channels, and keep providing good human support, especially to vulnerable citizens. We’ve done reporting in Spain showing the collapse of Social Security offices and support lines, and its impact on the take-up of the guaranteed basic income, for example.

Finally, be proactively transparent, and certainly don't fight information requests in court: trust in governments and technology is low enough as it is.

Civio is unusual in combining investigative journalism and advocacy – why do you take this approach and what advice would you give to others who want to work on algorithmic accountability?

We didn't plan it like this originally: we started doing advocacy about access to information legislation in Spain while promoting open data, which was the trendy thing at the time. But we soon realised we needed to bring the transparency and data discussions down to concrete issues that affect everybody, so we started doing more and more journalism, particularly data journalism, always using public sources and data.

Mixing advocacy and journalism has traditionally been taboo, and we had many doubts about it, but we slowly realised that it was actually one of our unique strengths. For example, after working on a public procurement investigation for many months and looking at actual data, we had detailed and thorough knowledge about certain illegal practices, and about the data we would need to detect them. So, when the procurement law was due to be amended, it made sense for us to write concrete proposals to improve transparency and to meet political parties to push them forward. But we do this carefully: our journalistic investigations start without a predefined agenda or outcome; we treat civil society as just one more source; and our lobbying and proposals are public and cross-party.

What’s your current go-to book, podcast or anything else you’d like to share with our readers?

I've been trying to follow the news around the technical developments of AI for the last few months, and it's been really hard to keep up to date with the constant updates and news and hype. I finally settled on a few newsletters which gave me a sensible and calm overview: from the more sceptical and cautious (Melanie Mitchell's Guide for Thinking Humans and Arvind Narayanan's AI Snake Oil) to the more optimistic, Jack Clark's Import AI.

Outside AI, I’ve been listening lately to a podcast called Musing Mind. Hard to summarise, but it’s a combination of psychology, economics and social sciences. I’m very curious about the impact AI -and the Big Tech companies which control it- may have on our social and power structures, and whether we are capable of imagining alternative models together.
