Announcing our funding strategy for 2022

In the summer of 2021, the European AI Fund commissioned research to identify further funding opportunities. Based on these recommendations, we have agreed to pursue a two-fold strategy:

  • Refining our current funding strategy by filling gaps and adding bridge-building organisations to our existing open call cohort.
  • Expanding our current funding strategy to cover additional functions of the AI and Society ecosystem, in particular around research, as well as developing alternative visions for Europe’s digital transformation.

Refining our current strategy

We want to continue to strengthen the policy and advocacy function of the AI and Society ecosystem, to make sure civil society is well resourced to advocate on the AI Act and related regulations with far-reaching implications for AI.

We also want to make sure that other crucial ecosystem functions, especially those working on transversal issues, can (continue to) build up tech capacity so they are ready to provide oversight once the AI Act and related regulation are in place.

Given the fund’s long-term vision, it is crucial that the EU adopt a strong AI Act that both protects fundamental rights and enables society to make use of technology in a way that benefits everyone equally.

At the same time, the inclusion of equity and anti-discrimination measures in the public and policy debate on AI harms shows just how important it is to support new voices that bring broader human rights and social justice perspectives to the table. This will connect top-level policy debates to the lived experience of those negatively impacted by AI and ensure these fundamental critiques do not fade away in the rhythm of political discussion.

Further cross-sectoral cooperation can be seen on topics such as AI’s impact on access to justice, the environment and sustainability, and labour rights. These contributions by social and racial justice organisations show that key transversal topics emerge through collaboration between existing and new voices in the AI debate that hold agenda-setting, watchdog, research and bridge-building expertise.

Expanding our current strategy

We want to support actors that can envision alternative futures for what Europe’s digital transformation can look like.

We found that although demonstrating harm is important, it is insufficient on its own; civil society also needs to create alternative models for how to do things democratically.

Supporting experimental approaches that are rooted in ethical and social justice challenges, and that create alternative technical, policy and social visions of what we want AI to look like, requires investing in thought leaders, agenda setters and builders of alternative futures who have both community and technical expertise.

This is not limited to thinking about public interest technology but should also focus on an alternative to the dominant logic that AI is needed to streamline processes, increase efficiency and reduce costs. Not only is it far from clear that AI delivers these results, but this management theory also fails to put the lives of people at its centre. Resources should be devoted to understanding areas where AI is not currently in use. What can be learnt from contexts where AI technology is not being deployed within government and there is less of a role for the private sector? This could be an important point of comparison to help develop alternative, positive models and visions. As one interviewee put it, “we really need big thinking on a new management theory that challenges the logic and model of AI streamlining processes, making things more efficient and cheaper.”

We also want to bring greater transparency and accountability to the €20 billion that Europe is planning to invest in AI annually.

So far, our funding has primarily focussed on strengthening the policy and advocacy function of the AI and Society ecosystem. While other functions will be needed once the AI regulation is in place, there is already an urgent need for civil society to act as a watchdog over the money being invested in AI in Europe, in particular on public sector use, consumer AI, industrial AI and infrastructure.

At the same time, Europe doesn’t have the investigative reporting capacity on technology and AI that has emerged in the US through organisations such as The Markup. Investigations by journalists and civil society into how technology works and the harms it creates or leads to are essential to building strong community groups and empowering affected communities. Likewise, storytelling can bring to life the potential consequences and opportunities for individuals and communities. It is no longer realistic to draw sharp lines between issues affecting technology and issues affecting society.

The public sector has a unique influence on people’s lives. Evidence from research, such as the Fragile Families study mentioned earlier, can be vital in showing where there is no legitimate case for the deployment of AI systems (child welfare is one example).
