For a fair, inclusive and sustainable future, we must ensure that Artificial Intelligence is developed and deployed to best serve the needs of people and society.
Too often AI is shaped by the interests of governments and the tech industry without adequate concern for the individuals and communities it affects. This can lead to technologies that entrench inequality, exacerbate social injustice and undermine people’s rights.
That’s why the European AI & Society Fund empowers a diverse ecosystem of civil society organisations to shape policies around AI in the public interest and galvanises the philanthropic sector to sustain this vital work.
We support organisations from across Europe that want to shape Artificial Intelligence to better serve people and society. Some of our grantees have previous expertise working on AI, digital rights and technology. Others represent particular communities or fight different kinds of social injustice and want to address how AI is affecting their work.
The European AI & Society Fund is supported by a group of philanthropic foundations that share our mission to shape AI in the public interest.
This list of civil society experts on AI contains profiles and contact information for policy experts, researchers and lawyers who can speak to the media and other stakeholders on issues such as AI regulation, facial recognition, racial justice, AI in health, border surveillance, algorithmic welfare distribution, conditions for the workers who train ChatGPT, and other key issues of our time.
In this interview with Algorithm Audit, a European knowledge platform for AI bias testing and normative AI standards and one of our grantee partner organisations, we take a deep dive into algorithm auditing and why it matters for human rights and human dignity.
This summer the European AI & Society Fund opened a new funding stream offering grants of €30.000 to support public interest work around AI. In this blog post, we reflect on what we learnt from the selection process.