What we’ve learned from the 2022 Open Call
Catherine Miller, Director, European AI Fund
At the start of November we announced the 14 new organisations that the European AI Fund will support to do policy and advocacy work around AI over the next two years. We’re really excited about the work they have planned. But announcing these new grantees only tells part of the story. Here we want to share more about what we’ve learned from our latest call for proposals – what we’ve found out about the field, some reflections on the process that might be helpful to other funders and some observations that will hopefully be useful to future applicants.
The European AI Fund exists to promote a diverse and resilient civil society ecosystem working on policy around AI. We were really encouraged to receive 143 applications from across the whole of Europe – a striking change from our 2021 Open Call which was dominated by responses from the UK, Germany and Belgium. We also got proposals from organisations with a wide range of backgrounds – not just those that had previously focused on issues around technology.
This is a positive sign that social justice organisations working on issues like migration, criminal justice or poverty recognise the ways that AI is shaping their field and the need to advocate on behalf of their communities. But it was striking that some of the areas where AI has the most potentially harmful impacts such as racial discrimination or climate destruction were poorly represented in the applications we received. We will work with our partners to understand these gaps more and what we can do to address them.
We also noticed a shift in the kinds of policy work organisations want to do. While the EU legislative agenda remains packed and still demands a lot of attention from civil society, as new regulations come into force, organisations want to ensure that they are implemented and enforced in their own geographies. And there are opportunities to shape the debate elsewhere: challenging the use of AI in welfare systems for instance was the focus of a number of applicants. Reflecting this evolution, we saw a welcome diversification of policy approaches, including investigations, technical audits, design and litigation. We believe this range of skills is essential to making sure civil society can drive change across the full spectrum of policy, not just legislative development in Brussels.
Many of our applicants were extremely small and new organisations. A number have achieved great impact based on volunteer energy and commitment alone – but are now running on empty. We were pleased to include some of these groups among our new grantees but don’t have enough budget or resources to fund as many of them as we’d like. If we want to achieve the resilient ecosystem we strive for, it’s vital that we support more of them to gain the capacity and stability to sustain their work over the long term.
We’re pleased that our final selection of grantees reflects some of the diversity of this ecosystem, but with only 1 in 10 of our applicants receiving funding we know it’s not enough. In the next year we plan to offer more, smaller grants so that we can better meet the appetite among organisations to work in this space.
The selection process
We are a young fund and this was only our second Open Call. Unlike bigger established foundations, we had the freedom to design the process in the way we wanted to. And there were a few things we wanted to prioritise.
- Let’s not waste people’s time. We think civil society should use its time and talents to change the world, not write proposals. In our first Open Call only 7% of applicants received funding. We didn’t want organisations to spend weeks on an application that might have little hope of success. Our first stage was a short concept note, with no detailed budget or background documentation required. We received 143 applications and asked only 32 to submit a more detailed second-stage application.
- Let’s not make people guess. It really wastes people’s time if they apply for funding that’s just not right for them. Often organisations are too afraid to ask. We held seven Open Door sessions during the application period where organisations could get more information and clarification about the funding. These sessions also helped us as a team understand whether we’d communicated the Call properly. In the end we received very few applications that were totally off the mark, so we think this strategy worked.
- Let’s not make people guess (again). Our selection committee and external experts brought a huge amount of consideration and engagement to their assessments. We didn’t want all this great thinking to go to waste. When we invited applicants to the second stage, we shared a summary of the strengths and weaknesses the committee had identified and let them know if we had specific issues we wanted to clarify so they knew what to focus on in their full applications.
- Let’s level the playing field a bit. Our goal is to grow the ecosystem and that means we need to welcome as many organisations into the space as possible. But we know established civil society organisations have dedicated fundraising capacity while volunteers are writing proposals late at night once their work and family commitments are over. In the second round we paid for a consultant to help nine smaller organisations draft their applications. We hoped this would ease the load a little and that, even if they weren’t successful, they would at least have learned something along the way. In the end, three of the mentored organisations were awarded grants.
- Let’s be thorough but swift. It’s tough to apply for funding and then wait months to know whether you are going to get it. In policy work around AI especially, events move quickly and organisations want to be able to act immediately. At the same time, we want to give applications the consideration they deserve and to get input from our funding partners and external experts. We managed to run the assessments in just over two months, with grantees getting their contracts three months after the application submission date. But this put a huge strain on the fund team and our partners to provide feedback to tight deadlines. In future we will design the process with a little more breathing room for all involved and think more about where external input will be most useful.
We’ve had some great feedback. One of our new grantees, currently battling their way through another funding bid, said they asked themselves why all applications couldn’t be like the European AI Fund’s. 😍 But of course not everything went to plan.
We are still working on how we articulate the nature of our funding. We want to give organisations capacity to do policy and advocacy and be flexible to the changing landscape we operate in, so we don’t offer project funding. At the same time we want to understand how organisations have come up with the amounts of money they’re requesting so we asked for budgets. Applicants were understandably confused that our offer of core funding required a budget breakdown against different cost areas. We’ll continue thinking about how we address this in our future processes.
We also found that in the assessments we often felt we were comparing apples and oranges. Should we support the big organisation with a track record of policy impact, the newcomer to the space or the volunteers that are struggling to professionalise? It’s hard to weigh up the relative merits of such diverse organisations. In future we may group responses into different clusters so that we’re comparing like with like.
We received amazing applications. It’s a cliché to say that we had very difficult decisions to make… but we really did. We were completely in awe of the depth of commitment of organisations to fighting for the public interest and the engagement and ingenuity they bring to their respective fields. But there are some ways that organisations could reflect this better in their applications.
One striking issue that cropped up again and again was that applicants didn’t really explain what problem they were trying to solve. Many proposals were built on an implicit assumption that AI is generally bad – and that there’s consensus around that. But the issues are much more nuanced, and it’s important to be specific about where and how AI can work against the public interest and why that matters.
Relatedly, many organisations didn’t clearly articulate what change they want to bring about. There’s a tendency to describe activities and processes without explaining what they are supposed to achieve. Organisations also need to think more deeply about who the audiences for their policy and advocacy work are – which decision makers have the power to make the change they want to see, and how to reach them and get them onside.
Finally, and surprisingly, lots of organisations didn’t tell us why they need the money. While it’s great to hear how successful organisations have been in their work, there’s a danger that you give the impression that everything is swimming along just fine. We want to make sure that the funds we give out make a real difference. If you are going to do the work anyway, there’s not much point in using our budget for your organisation.
We continue to think about our practices and welcome both support and challenge to make our work better. We’ve invited applicants to share their feedback via an anonymous survey. If you have ideas for improving how we fund, please let us know on firstname.lastname@example.org