Announcing €2m further grantmaking to support work on AI & Society  

We are pleased to be able to support 21 organisations within our community with further funding until December 2025. This commitment totals over €2 million and will ensure that diverse public interest organisations have the capacity to shape Artificial Intelligence. 


This grantmaking is part of the European AI & Society Fund’s new Build and Breakthrough Initiatives, which deepen our ongoing commitment to supporting a diverse and resilient civil society ecosystem while seizing new strategic opportunities to fight for AI that upholds our shared democratic values. 

We will support 18 organisations with a total of €1,890,000 under the Build Initiative. This funding will enable those most affected by the impacts of AI, including racialised people, sex workers, people with disabilities and migrants, to be active advocates in AI debates. It will enable civil society to coordinate policy action and spearhead evidence-based policy proposals on AI accountability, rooted in lived experience and human rights expertise. Our ambition is to sustain the momentum that this community has built up over the past three years and ensure that they can leverage their expertise to shape AI in the public interest across a wide range of contexts and approaches. Read more about their work so far in our Impact Report and Annual Report 2023. 

Additionally, we will support three organisations with a total of €370,000 under the Breakthrough Initiative. Foxglove, Gesellschaft für Freiheitsrechte and the Irish Council for Civil Liberties will be the first grantees of the Making Regulation Work Programme, which aims to challenge the harmful impacts of AI on society and secure accountability over AI through the implementation and enforcement of European legislation. As part of this new programme, they will strategically use laws and regulations to challenge the abuse of power by tech corporations and governments, including over automated systems that control workers, and set the tone for strong enforcement of the new European AI regulation. We are continuing to develop the grantmaking strategy for this programme and aim to offer further funding opportunities through it later in 2024. Please sign up to our newsletter to stay informed as these are announced. 

While we are excited to support these organisations to continue their important work, we were sadly not able to extend funding to all of our current grantees. We thank all of them for their contributions to shaping AI to serve people and society and for their generous and collaborative participation in our community. 


Making funding decisions is always challenging, and we have reflected on our grantmaking process in our blogpost ‘What we learned from our renewals process’. 

Continue reading to explore the full list of renewed grantees and find out more about the planned work. 


Build Initiative grantees: 

The passing of the EU AI Act is a significant milestone in the development of AI regulation, but it only marks the beginning of a longer journey in mitigating AI harms and rebalancing digital power. This extension will enable the Ada Lovelace Institute to focus on: 

  • Understanding the implications of different models for how liability is distributed across AI value chains; identifying equitable models that shift incentives to prevent AI harms onto the actors with greater capacity to address them; and understanding and communicating to stakeholders what these conclusions mean for the EU AI Liability Directive. 
  • Examining approaches to understanding public benefit from AI across disciplines, and how industrial and regulatory policy should be designed and evaluated to maximise that benefit. This is likely to include looking at what measures could make open-source approaches compatible with harm mitigation, and examining how competition law and regulators are addressing market impacts. 
  • Researching and influencing the implementation of the AI Act and the development of the AI Office (e.g. its institutional design, its relationship with national AI Safety Institutes, its efficacy in preventing harms and enforcing the law, and the relative value of different technical or ‘safety’ measures, such as foundation model evaluations, that it may implement). 

Additionally, the extension to our grant will allow us to continue to strengthen our work together by enabling Ada staff to participate in more training and development opportunities, and to come together as a team at an annual Away Day to support the development of Ada’s strategy and internal culture. 

Algorithm Audit convenes independent advice commissions and issues case-based advice to resolve urgent ethical issues surrounding the use of algorithms. The NGO conducts reviews of high-impact algorithms used in the private and public sectors, with the help of independent commissions of diverse experts, stakeholders and affected communities. Algorithm Audit distils the deliberations of these commissions into an advice report, which is made publicly available. These case reviews contribute to what they call ‘algoprudence’: a bottom-up, inclusive and concrete way of generating norms for questions surrounding algorithmic bias, explainability and the responsible deployment of algorithms in society. 

Algoprudence is publicly available in an online case repository. The repository has various functions:  

  1. Learning & harmonisation: the advice issued by independent commissions results in generalisable and publicly accessible norms, fostering a collective learning process on how to deploy responsible AI. 
  2. Question & criticise: the repository facilitates discussion of the normative advice formulated by the advice commissions, actively inviting others to respond and to publish dissenting views. 
  3. Inclusion & participation: a diverse group of stakeholders (including citizen representatives) is included in convening the normative advice commissions. In this way, Algorithm Audit’s activities contribute to wider participation and to shaping responsible AI together. 

In addition, the NGO develops and maintains open-source AI auditing tools, such as the Python library unsupervised-bias-detection, and holds expertise in the use of synthetic data generation for AI bias testing. Overall, Algorithm Audit serves as a European knowledge platform for AI bias testing and normative AI standards. 
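
To make concrete the kind of analysis such tooling supports, here is a minimal, hypothetical sketch of unsupervised bias detection in Python. It deliberately does not use the unsupervised-bias-detection library’s actual API: it only illustrates the underlying idea of clustering data without reference to protected attributes and then comparing a model’s error rates across the resulting clusters.

```python
# A minimal, hypothetical sketch of unsupervised bias detection -- NOT the
# actual API of Algorithm Audit's unsupervised-bias-detection library.
# Idea: cluster test data without reference to protected attributes, then
# compare the model's error rate per cluster to flag groups it may underserve.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real audit would use the system under review.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
errors = (model.predict(X_test) != y_test).astype(float)

# Cluster the test samples on features alone (no protected attributes needed).
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_test)

overall = errors.mean()
for c in range(5):
    rate = errors[clusters == c].mean()
    flag = "  <- review" if rate > 1.5 * overall else ""
    print(f"cluster {c}: error rate {rate:.2%} (overall {overall:.2%}){flag}")
```

In a real audit, clusters with markedly elevated error rates would then be examined qualitatively, for instance by an advice commission of the kind Algorithm Audit convenes.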

Governance of AI is transitioning from creating new regulation to enforcing it. The Digital Services Act (DSA) is in place and fully applicable; the Artificial Intelligence Act (AI Act) has been adopted, and its enforcement structures now need to be devised and built. 
 
But, although important, the DSA and AI Act are only a few pieces of the puzzle needed to ensure a human rights-oriented perspective on AI. The AI Act, with its product safety approach, not only exhibits various loopholes for companies and authorities, but may also fail to address the impact on people of automated decision-making systems that do not qualify as “AI”. Currently, there are only limited means to defend the rights of clickworkers, to counter algorithmic injustices such as discrimination and the AI-based surveillance and control of migrants, or to challenge the fact that the development and use of AI are controlled by a very small number of big companies. 
As a result, AlgorithmWatch will advocate for effective enforcement structures for the AI Act, and devise and test strategies to enforce the rules already on the books, namely the DSA – e.g., by investigating the systemic risks platforms pose to society, using data access requests and other means afforded by the law. In addition, the organisation will increase its journalistic and communications activities to question the AI hype, investigating and reporting on the real-life consequences of the use of automated and AI-based systems. 

In the implementation of the European AI Act, where standards will play a key role for high-risk AI systems and for sustainable and generative AI, ANEC’s objective will be to strengthen the consumer voice in AI standardisation. ANEC will build the advocacy capacity of civil society organisations (CSOs) at the European level by participating in AI standardisation, sharing expert knowledge with CSOs on contributing to standardisation, submitting comments, and engaging with CSOs in standards policy and lobbying at the national level. 

ANEC’s activities will include contributions by the ANEC AI expert to CEN-CENELEC JTC 21 on AI standardisation, including Working Group (WG) 1 ‘Strategic Advisory Group’, WG 4 ‘Foundational and Societal Aspects’ and the ‘Task Group on Technical Coherence’. ANEC will also be active in the ‘Task Group on Inclusiveness’ and recruit an expert to participate in the new WG 5 on ‘AI Cybersecurity’. 

ANEC will reach out to EAISF grantees and other CSOs to provide updates on the new EC AI Standardisation Request and inform them of the urgent need, and the most efficient methods, to influence the AI standardisation work, so as to counterbalance the interests of (non-European) big tech with those of civil society. Similarly, ANEC will ask that international standards be scrutinised and adapted to European values, so that companies operating in the Single Market will need to follow more stringent measures and respect fundamental rights. 

The new Spanish Minister for Digital Transformation recently presented his strategy in Congress, in which AI plays a critical role as an unstoppable force for good. For example, since citizens complain about complex administrative processes and obscure language, chatbots will be introduced to guide them. Justice and Social Security will also be transformed, according to their Ministers. Enforcement of the AI Act starts in 2026, but it is clear that governments will not wait until then to deploy AI systems, unbounded by regulation. 

To counteract the uncritical industry and official discourse, Civio will continue monitoring administrative decisions, the development of new systems and their impact on ordinary citizens. We’ll also follow through on the promises made to Spanish civil society, such as participation in the recently created AI sandbox and supervision agency, which have yet to get started. We’ll do this through public advocacy and face-to-face meetings, together with the growing coalition we’re helping to establish. 

And, while still worried about the Digital Welfare State, we also want to look closely at public healthcare systems, since our first exploration has shown AI systems being used without any registry of them. Worryingly, to give just one example, Madrid recently announced a GPT-4 chatbot for doctors to diagnose rare diseases, without any prior public discussion or evaluation. We’ll use investigative journalism with transparent methodologies, combined with our skills in access to information, litigation and public procurement.

EDRi will use the support of the European AI & Society Fund to bolster its ongoing work on AI, with a particular focus on two strands.  

The first will push for a justice and human rights-focused implementation of the EU AI Act. EDRi’s coordination has brought together EU-level civil society organisations in a coalition to advocate for the use of AI only where it truly benefits individuals, communities and society – with red lines where it does not. In this strand, EDRi will build on existing efforts to drive even more effective collaboration, to collectively explore enforcement of the Act at national level, and to mobilise the public.

The second strand will explore both legislative and non-legislative initiatives on AI with a focus on areas not (sufficiently) covered by the EU AI Act. This will include monitoring, research and advocacy in areas such as the environmental impact of AI, the digitalisation of public and essential services and the use of scoring algorithms in the public sector. It will also include AI-enabled harms in the areas of worker surveillance, policing, migration and securitisation. 

The EAISF grant will enable the sustainability of coalition efforts to represent civil society voices in AI regulation in Europe, and with global effect. Our continuous engagement, mobilisation, communications and advocacy at European level will demand accountability from decision-makers to people affected by AI systems. It will also contribute to maximising the impact of the small wins of the AI Act, as well as contesting its harms and mistakes. 

In the next phase of their “Disability Inclusive AI” project, EDF will continue their advocacy work towards the EU institutions and EU Member States on all relevant initiatives related to artificial intelligence, jointly with key stakeholders, including the European Digital Rights network (EDRi) and the AI core group. We will also keep advocating towards the Council of Europe’s initiatives related to AI and human rights. We will coordinate and support our members’ national advocacy regarding AI. We also plan to influence standardisation processes related to AI and to ensure they protect the rights of persons with disabilities. 

Our second priority will be to strengthen the capacity of our European and national members to work on AI policy and standardisation, including ensuring they contribute to the monitoring of the AI Act. We will do this by organising trainings (online and in person, for members as well as the secretariat) and developing materials on AI and disability that they can use at national level. 

Finally, we will continue to raise awareness of the importance of inclusive and accessible AI for persons with disabilities, mostly by researching, analysing and communicating on AI development, its risks and opportunities, and how it impacts persons with disabilities. We will also create a European and national communication campaign on the need to include persons with disabilities in AI design, monitoring and AI-related policies. The target groups will be the disability movement, policymakers and technology experts. We will also raise awareness about AI among the UN Committee on the Rights of Persons with Disabilities. To achieve all of this, we will continue our AI newsletter, blog posts and outreach to media, and we will organise a conference on AI.

With the support of the European AI & Society Fund, ENAR will continue organising activities for their members, such as thematic events in collaboration with digital experts, and increase their members’ participation on the topic of digital rights. At the same time, ENAR strives to create room for civil society and anti-racism actors in Europe to advocate for an ecosystem that ensures racialised people’s participation and protection within a strengthened fundamental rights framework: to build and promote a positive narrative on digital rights and its role in mainstreaming racial justice in European societies, and to strengthen and empower ENAR members and the broader civil society to support a racial justice approach to digital rights. 

The project aims to strengthen ENAR’s networking and influence in pursuit of concrete advocacy outcomes that inform policies and practices with the experiences of racialised communities and civil society, to ensure that technology does not harm or leave behind these communities. Our project also aims to exchange insights and knowledge with the ecosystem to ensure that a racial justice perspective is not overlooked by its key stakeholders. Finally, we expect that the project’s awareness and capacity-building activities will have sustainable, multiplier effects that leave participants with skills and tools that can be used in the future.

With the new European AI & Society Fund grant, ESWA will implement activities that align with its Strategic Plan and Theory of Change. We will focus on building the capacity of our members and sex worker community, scale up our advocacy and campaigning work, and continue to build and maintain relationships with other civil society organisations, policymakers, and the private sector to advocate for safer technologies and a human rights-based approach to designing, developing, and implementing regulations.  

ESWA will complement its work on policy by monitoring and evaluating the Digital Services Act (DSA) in order to understand the impact of this important legislation on sex workers. We will develop a new series of Sex Work & Tech Tarot Cards and conduct new community research on the impact of deepfakes on sex workers and recommendations for a human rights-based regulation of this technology. In 2025, we will also finalise our research on social media accountability, which will guide us in establishing relationships with social media platforms to promote an online environment that respects the human rights of sex workers. 

Autistic people’s voices are absent from the AI and tech discourse, advocacy processes and policymaking concerning current and potential uses of AI, even though autistic people are among the groups most intensively targeted by AI applications. 

EUCAP’s AIRA project intends to understand autistic people’s needs, concerns and priorities regarding AI, and to build the capacity of autistic advocates and autistic-led organisations in Europe to advocate for the rights of autistic people in matters involving AI research, product development, legislation and policy, at both national and European level. The project will create resources on what AI is, how it is being used in the autistic community, and how to advocate for and shape better AI for autistic people. We will conduct online webinars on different subjects and organise several training events in European countries to train larger numbers of autistic advocates. We hope to convey the knowledge, skills and mutual support that will enable them to approach decision-makers, research institutions, industry representatives and others to bring autistic perspectives and knowledge into discourses where they are needed.

With the recent developments in EU migration policy, from the adoption of the EU Pact on Migration and Asylum to bilateral agreements with countries on the southern shore of the Mediterranean, one thing is clear: the EU’s efforts to externalise migration control are accelerating. 

With this project, EuroMed Rights will focus on how EU-funded technology and AI are deployed in the external dimension of EU migration policies, with the potential to further shrink civic space in third countries. EU-funded technology is increasingly deployed in cooperation with third countries such as Egypt, Libya, Tunisia, Morocco, Turkey and Lebanon to curb migratory flows. Through bilateral agreements that often do not undergo democratic scrutiny, the EU and Member States sign deals for the training, equipping and maintenance of a border control infrastructure with a strong technological component. When deployed by fragile democracies or fully authoritarian regimes, this technology can have severe consequences for fundamental rights, from the rights to freedom of expression and assembly, privacy and data protection, to the right to asylum and to leave one’s country. 

With this project, EuroMed Rights will investigate how technology and AI are used in EU migration externalisation policies, and conduct advocacy and communication activities to shed light on this issue. With a network of 70 civil society organisations in 30 countries in the Euro-Mediterranean region, the project will allow EuroMed Rights to mobilise its membership around issues at the intersection of migration policy, fundamental rights and freedoms. 

The grant from the European AI & Society Fund will enable Glitch to continue bringing an intersectional perspective to AI policy in Europe. The funding will allow Glitch to build on their collaborative research with the European Network Against Racism (ENAR) on deepfake image abuse of Black women, which provided insights into a relatively understudied area of AI harms, and to consult with their community of Digital Citizens to co-create non-criminal routes to redress for Black women impacted by deepfake abuse. Gathering feedback from their community and co-creating solutions will inform how Glitch and ENAR’s research can be used to develop strategy, policies and interventions that best protect Black women. These co-created solutions will also support Glitch to advocate for preventative measures in tech development that better consider marginalised communities. 

With this funding, Glitch aims to better coordinate their efforts with others working in the AI ecosystem, building a directly engaged network of Black researchers, tech experts and facilitators who focus specifically on the impact of AI on Black women and Black people. The funding will contribute towards the staff time of Glitch’s Systemic Impact pod to carry out this work, and will support Glitch to further implement their new ‘working group’ approach, bringing in expert capacity on particular research subjects and/or research processes to fill any gaps in knowledge within their team. 

Health Action International will address three priority areas: 

Advancing health-centric AI policies: We will champion fundamental rights in EU regulatory files such as the AI Act and the European Health Data Space. Our focus is to ensure policies prioritise patients and their rights, for example through participation in standard-setting in the Dutch standardisation body. We will continue debunking overly optimistic narratives, using evidence-based policy analysis and a fundamental rights lens to inform policymakers. 

Raising awareness on AI in elderly care: In the Netherlands, where questionable AI floods elderly care, we’re pushing for a patient-centric approach. The current view of digital care as the aspirational goal needs a rethink – AI should enhance lives, not dictate care. We press for a fairer, rights-focused approach and strong conditions for implementing AI in the sector. 

Strengthening community ties: Our efforts are multiplied through partnerships. We'll promote collaboration with fellow organisations and experts across the EU, boosting coordination and expanding our network. Together, we'll amplify our advocacy. 

Homo Digitalis (HD) is a digital rights civil society organisation in Greece, and a proud member of the European Digital Rights (EDRi) network. Its goal is the protection of human rights in the digital age. It strives to influence legislators and policy makers on a national level and raise awareness among the wider public regarding digital rights issues. Also, when digital rights are jeopardised by public or private actors, HD carries out investigations, conducts studies and takes legal action.  

For instance, in collaboration with allies, HD’s complaints led to the highest fines ever imposed by the Greek Data Protection Authority on both a private entity (Clearview AI, €20 million) and a public one (Hellenic Ministry of Asylum and Migration, €175,000). In 2023, with the support of the European AI & Society Fund, HD was able to hire its first ever full-time staff member, a Director on AI and Human Rights, for two years. This allowed HD to set up the framework necessary for transitioning to a full-time organisation and to strategise sustainable future growth. The objective now is to continue these big steps towards transforming HD into a strong human rights watchdog in the field of AI development and deployment. HD will monitor the enforcement of the AI Act in the Greek legal landscape, engaging with relevant stakeholders. HD will also investigate whether AI systems deployed by Greek authorities in the fields of policing, border management and employment comply with the applicable rules. 

Panoptykon will focus on advocacy related to the AI Act: 

  • responsible use of big data and AI in public policy making, including introducing obligatory impact assessments for the Polish administration, 
  • effective consumer protection in AI-driven online services as state responsibility, 
  • strict safeguards (including independent oversight for law enforcement activity) and open debate before any state deployments of AI systems in policing or border governance. 

To pursue this agenda, we plan to engage in public consultations, put these objectives on the agenda of the Council for Digital Affairs, work with Polish tech journalists and political influencers, and publish op-eds and podcasts (under the “Panoptykon 4.0” label). We want to use the revision of the Polish AI strategy and the upcoming AI Act implementation as door openers to enter the mainstream political debate. 

In parallel, we will continue our watchdog activity in Poland, including monitoring preparations for the AI Act’s implementation at national level. 

We also plan to engage in advocacy activities proposed by peer organisations in Brussels, as well as build our understanding of trends in technology and international politics. 

Following the adoption of the AI Act, there continues to be a critical need for perspectives within the ecosystem on the harmful ways that AI and other technologies are used to reinforce racialised oppression, as exemplified in the migration and law enforcement fields. PICUM will continue efforts to sustain and build additional connections with diverse partners for collective work, in a way that is sustainable and strategic. In particular, we will continue our work with the #ProtectNotSurveil coalition to create and communicate a shared vision of what is at stake, and to connect the various threads of our advocacy across migration and digital policy spaces, including through engagement with national partners. We will continue work on the financing of technology for border management, ahead of the review of the EU’s Multiannual Financial Framework (MFF) and negotiations for the post-2027 budgetary period, and step up our efforts to support our members’ national-level mobilisation and advocacy to resist harmful trends in Member States that seek to expand data gathering and exchange across sectors for immigration control purposes. 

In the upcoming period, the SHARE Foundation and Politiscope will continue their commitment to the Monitoring AI-backed Surveillance Architectures (MASA) project. Both organisations will maintain a vigilant focus on the deployment of advanced technologies in Serbia and Croatia, advocating for enhanced human rights standards in the development and application of legislation at both the national and EU levels.

State capture and weak institutions can lead to human rights abuses through the misuse of advanced technologies; recognising this, we see the proper implementation of human rights standards as essential for bolstering institutional independence and the rule of law. Our efforts will particularly focus on the impact of AI misuse on vulnerable groups.

In the forthcoming period, we will monitor the implementation of key EU regulations, including the AI Act, the Digital Services Act (DSA) and the Digital Markets Act (DMA), while engaging with decision-makers and stakeholders to influence the evolution of artificial intelligence in Europe, concentrating on Serbia and Croatia. Additionally, we will advocate for Serbia’s further integration into the EU’s digital regulatory framework and aim to establish a secure digital environment in this often overlooked region of Europe. 

Making Regulation Work Programme grantees: 

Gesellschaft für Freiheitsrechte e.V. (GFF) promotes democracy and civil society, protects against disproportionate surveillance, and advocates for equal rights and social participation for everyone. Its strategic litigation efforts include combating algorithmic discrimination by state and private actors.  

With the help of the European AI & Society Fund, GFF will strengthen its efforts to complement its strategic litigation with advocacy work at a pivotal time in EU platform regulation. With the entry into force of the AI Act and other regulatory provisions and laws, individuals will gain unprecedented new means of enforcing their fundamental rights against platforms, for example in regard to their increasingly automated content moderation processes. Whether those instruments will lead to a tangible increase in platform accountability will depend on the nature of their enforcement.  

Germany, as the largest EU Member State, will play a central role in establishing a strong enforcement practice. GFF therefore aims to make this regulation work by advocating for effective domestic implementing laws and by bringing legal actions. GFF will identify strengths and weaknesses of the new regulation to assess which parts can be used to create more accountability and which will require strengthening in the future. This form of legal monitoring is crucial for good and effective national implementation. Overall, GFF wishes to take an active role in these developments by promoting a fundamental rights-based approach to AI systems and laying the groundwork for future effective litigation against infringements of individual rights. 

ICCL Enforce will build on its technical and policy expertise to shape the implementation and enforcement of the EU AI Act and to strengthen global AI governance. To make the EU AI Act enforceable, ICCL Enforce will work with lawmakers on the code of practice for general-purpose AI and on delegated and implementing acts. This will be complemented by working with AI regulators to set the tone for the EU AI Act’s enforcement regime. At the global level, ICCL Enforce will work with forums such as the United Nations. In addition, ICCL Enforce will work on (1) the environmental impact of AI and (2) the concentration of power and anti-competitive practices of AI companies. 
