Who we are
The European AI Fund is a philanthropic initiative to shape the direction of AI in Europe. We are a group of national, regional and international foundations working in Europe. Our aim is to strengthen civil society and deepen the pool of experts across Europe who have the tools, capacity and know-how to catalogue and monitor the social and political impact of artificial intelligence (AI) and data-driven interventions, and to hold those responsible to account.
Our long-term vision is to promote an ecosystem of European public interest and civil society organisations working on policy and technology, based on a diversity of actors and a plurality of goals that represents society as a whole. This means that we want to:
- Strengthen civil society and deepen the pool of experts across Europe who have the tools, capacity, and know-how to catalogue and monitor the social and political impact of AI and data-driven interventions and hold those responsible to account.
- Have more civil society organisations, both those already working on AI/machine learning (ML) and those not yet working on the topic, shape the direction of AI and its uses.
- Empower civil society organisations to participate in the development of positive future visions that can be achieved with the help of AI technology.
- Ensure that these civil society partners are hosted in stable organisations and institutions.
- Be more and better connected in order to learn from each other.
The members of the European AI Fund envision a world where AI serves the interests of individuals and society, and where the policies and funding that regulate AI champion equity, fairness, and diversity, as well as democracy and human rights. The pursuit of this vision includes addressing the role AI may play in entrenching or amplifying structural forms of discrimination and exclusion, such as racism or gender prejudice, social or cultural bias.
The fund’s purpose is to promote this vision by funding and building an ecosystem of European public interest and civil society organisations working on policy and technology, balancing a diversity of actors and a plurality of goals that represents society as a whole. In the debate over AI rules, the voice of civil society often gets lost. This is especially the case for groups that represent marginalised communities, who are often disproportionately affected by AI risks and harms.
Operationally, the Fund is committed to using its resources to build a more equitable AI and society ecosystem. Equity, at its heart, is about removing the barriers, biases and obstacles that impede equal access and the opportunity to succeed. We try to eliminate structural barriers that have traditionally excluded organisations from access to funding. We also aim to involve civil society by including people from a wide range of backgrounds and lived experience in our decision-making and at our events.
The Fund’s mission statement is operationalised through a Monitoring and Evaluation (M&E) framework. The M&E framework will help us track and assess the results of the Fund’s interventions throughout the lifecycle of a programme. It is a living document that should be referred to and updated on a regular basis.
This fund aims to strengthen civil society’s ability to take on the crucial role of being a visible and effective voice in public and policy debates on the form and shape that Europe’s digital transformation should take. Our goal is to bring new actors into the debate, especially those working on issues affected by AI and algorithmic decision-making (ADM) who want to build their capacity in this domain.
Some of the ethical questions and human rights challenges associated with AI and other algorithmic decision-making systems (ADM) include, but are not limited to:
- Pervasive state surveillance to monitor and control the behaviour of individuals and groups (e.g., police equipped with facial-recognition tools or the automated analysis of mass communications data by national security agencies).
- Manipulation of democracy (e.g., automatic generation and micro-targeting of fake news and propaganda, foreign interference on social media platforms, deep fakes).
- Discrimination through automated decision-making. AI relies on computational models, data and frameworks that can reflect and amplify existing biases, resulting in biased or discriminatory outcomes. This risk requires particular attention when AI is used by the public sector (e.g., to determine access to social security benefits based on automated scoring or to determine the risk of reoffending in the justice system).
- Profiling of individuals by private companies. By unleashing AI on vast troves of personal data, people are profiled according to certain aspects of their lives (e.g., in AI-powered job recruitment, access to banking and insurance, targeted advertising). This also encompasses the incentivisation of data collection, storage and sharing without meaningful consent, and the conversion of non-personal, non-sensitive data into sensitive data.
- Monopolisation and concentration of power, whereby the capture, use and deployment of data creates market monopolies that drive down standards.
Underscoring these ethical questions is a context of limited liability and accountability when harm is caused in any of these scenarios.
Recognising these challenges, the European Commission has ambitions to forge a plan for Europe’s digital transformation, including AI. But without strong civil society participation in the debate, Europe and the world risk missing key opportunities to better society, instead choosing a path paved with societal harm.
Europe is well placed to lead the world. Its General Data Protection Regulation (GDPR) and forthright approach to competition regulation both point to its ability to set global norms. But if Europe is to effectively lead the way, it needs to address its knowledge gaps. Deep technical knowledge of AI is scarce and mainly concentrated in large technology companies like Amazon and Google. Moreover, technology and its capabilities are beset with hype and jargon, making it hard for lay people to navigate the terrain and challenge misleading claims. Finally, most of the research conducted on the societal impacts of AI (e.g., around racial discrimination) is based on applications and examples from the United States.
About the European AI Fund
The fund is supported by the Robert Bosch Foundation, Charles Stewart Mott Foundation, Fondation Nicolas Puech, Ford Foundation, King Baudouin Foundation, Luminate, Mozilla Foundation, Oak Foundation, Open Society Foundations and Stiftung Mercator. We are a group of national, regional and international foundations working in Europe that are dedicated to using their resources—financial and otherwise—to strengthen civil society and deepen the pool of experts across Europe who have the tools, capacity and know-how to catalogue and monitor the social and political impact of AI and data-driven interventions and hold those responsible to account.
“In the debate over AI rules, the voice of civil society often gets lost. We believe that the broad interest of the public needs to be part of the European AI debate. Our goal is to give both financial and organizational support to institutions that have traditionally stood up for the public interest — such as consumer rights organisations, racial and economic justice organisations or labour unions — to boost their in-house AI expertise.”
Swee Leng Harris, Principal at Luminate, and Mark Surman, Executive Director of the Mozilla Foundation – Co-Chairs of the Fund’s board
“This fund supports crucial work at a crucial time. To steer the direction of Europe’s digital transformation we need a strong, diverse and effective ecosystem of civil society organisations. The conditions created by Covid have made this task more important and more urgent than ever.”
Frederike Kaltheuner, former Fund Director
The European AI Fund is hosted by the Network for European Foundations (NEF) and is based in Brussels. NEF is an association of leading European foundations, dedicated to strengthening philanthropic cooperation. NEF initiatives are open to foundations interested in joining forces in different strands of work (democracy, social inclusion, international development).
The AI Fund’s governing body, the Steering Committee, is composed of the AI Fund’s partner foundations. The Steering Committee has two co-chairs. All participating donors are also encouraged to participate in the working groups that guide the selection of the grants.
The European AI Fund is managed by a Fund Director, Catherine Miller, who acts as the liaison between NEF, the fund’s grantees and its donors, as well as the Programme Manager Alexandra Toth. The Steering Committee is led by Mark Surman, Executive Director of the Mozilla Foundation, and Becky Hogge, Program Officer at the Open Society Foundations.
Address & Contact Info