Health Action International
Protecting patients’ care and rights from AI-driven healthcare systems
With AI being used to support elderly care, accelerate medicine research and development, and assist with diagnosis and mental health support, AI systems risk deepening health inequalities by relying on inaccurate health data, reinforcing bias and stripping patients of autonomy. Patient data captured by AI systems is also at increased risk of data breaches and of secondary use for commercial purposes by technology and pharmaceutical companies.
To counter this, the European AI & Society Fund supported the independent non-profit organisation Health Action International (HAI) to become an expert on AI and health – and to share that knowledge with civil society, policymakers, practitioners and journalists. With our support, they also developed their leadership skills and went on to create the Digital Rights and Health Alliance, a coalition of health justice organisations advocating for the fairer application of AI in healthcare.
The challenge
With overwhelmed public healthcare systems and a profitable pharmaceutical industry, European governments and venture capitalists are investing heavily in health tech, including AI-driven systems. The AI Act goes some way towards protecting patients’ rights, but ongoing lobbying from the med-tech industry threatens patient privacy as companies push to use patient data for commercial purposes.
In addition, healthcare data is full of bias and inaccuracies which, when fed into AI systems, can easily be amplified, discriminating against vulnerable, marginalised communities. This could lead to misdiagnosed illnesses and limited access to appropriate care, healthcare technology or medicine. Poor transparency about how AI-driven healthcare systems make decisions can deepen health inequalities and strip patients of autonomy, dignity and appropriate care.
As the European Union (EU), World Health Organisation (WHO) and other institutions develop new regulations, there is a need for civil society to advocate for patients’ rights around AI.
The actions
Based in the Netherlands, Health Action International (HAI) is an independent non-profit researching and advocating to enable access to safe, effective, affordable and quality-assured medicines and rational medicine use for everyone, everywhere. With support from the European AI & Society Fund, HAI developed its expertise and advocacy in AI and healthcare.
Protecting patients’ data rights
The AI Act classifies AI applications into four risk categories – unacceptable, high, limited, and minimal (or no) risk. An application is classified as high-risk if it could negatively affect the health and safety of people, their fundamental rights or the environment.
During the AI Act risk classification process, HAI successfully engaged Members of the European Parliament and policymakers, advocating for the retention of medical devices as high-risk AI despite fierce lobbying from the med-tech industry. Based on their research, they also called for more health-related AI to be classified as high-risk under the Act, and for a fundamental rights impact assessment – one that includes impact on health – for all deployers of AI. HAI’s recommendations on which AI-related health services and health technologies should be prohibited were adopted by the European Commission in its guidance.
To ensure patients have control over the use of their health data, HAI also successfully advocated for patients to be able to opt out of their data being processed and used for secondary purposes (such as research or commercial purposes) under the European Health Data Space regulation – a common framework for the use and exchange of electronic health data across the EU.
Highlighting health-related AI risks
AI has been known to completely fabricate clinical studies, and, as our grantee partner Civio has found in Spain, AI is misdiagnosing melanoma cancers. HAI has therefore spoken to the European Medicines Agency Big Data Steering Group, the WHO and pharmacy students about the risks of using AI for diagnosis and for medical research and development.
With an ageing population across most of Europe, AI is also being applied to elderly care. Centring patient experience, HAI produced animations explaining the potential applications and risks of AI, such as loss of privacy, lack of consent, discrimination, and the fact that elderly patients are less able to advocate for themselves. These have helped raise awareness of the issues with the media and built strong public interest, ahead of the Dutch government releasing a strategy on healthcare and AI.
How the European AI & Society Fund helped
Since 2021, two back-to-back grants, capacity building and leadership training from the European AI & Society Fund have helped HAI develop into an authority on AI and healthcare, backed by publications and educational resources on the application of AI in healthcare that underpin their advocacy work.
They have successfully engaged with health institutions including the European Medicines Agency Big Data Steering Group, the European Commission, and the Heads of Medicines Agencies. Significantly, the WHO took HAI’s concerns about its new Smart AI Resource Assistant for Health (S.A.R.A.H.) bot seriously – the WHO now specifies that it is a prototype unable to give medical advice. The WHO has also revised its internal digital governance to better align with its own AI ethics guidance, and has invited HAI to join a multi-stakeholder programme related to its Global Initiative on AI for Health.
HAI has also built up its credibility with journalists and is often sought for comment, helping shape the narrative around AI and healthcare, particularly elderly care. They have also successfully engaged with Members of the European Parliament and Dutch MPs, and continue to influence policy.
Finally, with our support, HAI was able to forge the Digital Rights and Health Alliance. The Alliance coordinates advocacy on tech and health policy in Europe through networking, information exchange, strategising and the sharing of best practices among civil society organisations, academics and other relevant stakeholders. Members research and develop policy on AI, data privacy, smart health technologies and digitalisation in the health sector. For example, they wrote a joint letter in the British Medical Journal on “Why the EU AI Act falls short on preserving what matters in health”. They are now focusing on the impact of AI-driven smartphone apps on people’s health.