Interview with Janneke van Oirschot, HAI

 

Health Action International (HAI) is an independent non-profit organisation. It conducts research and advocacy to advance policies that enable access to safe, effective, affordable and quality-assured medicines and rational medicine use for everyone, everywhere. HAI focuses on creating long-lasting changes to government and industry policies and practices, rather than on temporary solutions.

HAI’s goal is to strengthen its existing medicines policy expertise by recognising, understanding, and reflecting the impact of AI/ADM in its research, advocacy, and communications activities. We talked to Janneke van Oirschot, HAI’s Research Officer, to find out more about AI’s impact on healthcare.

 

Where do you see the biggest risks and opportunities of AI and tech for healthcare?

Janneke: Well, for those of us working in global public health, the model for pharmaceutical development, ‘from bench to patient’, is flawed. In recent years, COVID aside, pharmaceutical R&D has produced little in the way of real advances for global public health. It has produced some medicines for so-called orphan diseases (very rare conditions with small patient groups), but on the whole it continues to replicate ‘me-too’ drugs which add nothing of therapeutic value to the pharmaceutical armoury of substances the WHO classifies as ‘essential’. If AI simply assists the business model of the pharmaceutical industry, by accelerating access to market, screening more molecules for potential activity, requiring fewer subjects to be enrolled in clinical trials and so on, it will only increase the number of superfluous products on the market. At the same time, it will make the already massively profitable pharmaceutical industry even more profitable, and it will be business as usual – medicines for markets and not for patient needs.

More specifically, there are considerable risks from AI in healthcare. For example, there is the risk of replicating bias, which is found in all types of healthcare data, and the risk of decreased human agency, when systems that lack transparency and explainability leave patients unaware of what is being used on them, or even have physicians making decisions with limited information. On top of that, there are many concerns around data privacy and ownership, as sensitive and personal health data are now marketed and used on a huge scale for commercial purposes.

On a higher level, technological advancements in healthcare bring other risks when they influence the distribution of healthcare resources, or when they are only available or accessible to a select group of people. Especially where technologies foster the move to a more individualised, precision model of medicine, they might be adapted to work well only for specific demographics and might only be available to certain privileged groups. This creates major risks of incorrect diagnosis and treatment for minority groups, and can have a major effect on health inequality within and between countries. All of this illustrates the need for comprehensive regulation of health-related AI in the EU. Unfortunately, the current regulatory proposal does not sufficiently address these concerns.

The opportunities of AI in healthcare lie in decreasing the burden of routine administrative tasks, as well as in diagnosis from digital diagnostics (ECGs, images, scans), where AI is already performing quite impressively. However, close attention should be paid to how these systems can be utilised in clinical practice to improve care quality, reduce physician burden and improve patient outcomes; only if they are of proven added benefit on these aspects should they be deployed. In the medicine development cycle, the increased availability of data, computing power and machine learning technologies allows for many new ways to predict medicine targets, novel chemical structures and optimal synthesis routes, to repurpose existing medicines, and to find new candidate compounds. On top of that, AI can foster better insight into the effectiveness and side-effects of medicines in real-world settings by drawing on real-world data sources and evidence.

What actions do you take to advance policies that enable access to safe, effective, affordable and quality-assured medicines and rational medicine use for everyone? 

Janneke: As mentioned above, we are not big fans of the biomedical R&D model, which tends to lead to market crowding and the promotion of expensive new formulations of ‘me-too’ drugs for which there are perfectly good existing therapies. Because R&D is incentivised by monopoly intellectual property rights, the price of new medicines is often eye-wateringly unaffordable. Again, the pharmaceutical industry will therefore chase profitable markets rather than real health needs. That said, for the industry itself, it is a model that works incredibly well and makes it among the most profitable in the world. We take action to encourage R&D that is patient-focused and responds to the real needs of global public health; to look for ways around the IP framework that strangulates low-income countries’ ability to access medicines; to promote policies that keep prices affordable; to make sure that countries, particularly in the global South, have essential medicines policies on the statute books; to ensure that healthcare professionals are not subjected to undue promotional activity; and to ensure that patients are informed about the medicines they are taking. To name but a few!

In the context of our AI programme, Health Action International conducts research and advocacy to improve policies and regulations on AI in healthcare and medicines development in the EU. At the moment, that means we are closely following the EU’s AI Act legislative process and advocating with MEPs for better regulation of health-related AI. We’ve developed a report that analyses the consequences and gaps of the AI Act regulatory proposal for health-related AI. Further, we follow the work of the Big Data Steering Group of the European Medicines Agency (EMA) and Heads of Medicines Agencies (HMA), which has a mandate to increase the utility of big data in medicines regulation. By delivering a regulatory system able to integrate big data into its assessments and decision-making, the group envisions supporting the development of innovative treatments more quickly and optimising the safe and effective use of medicines. We want to ensure their actions truly benefit the regulation of medicines and public health, and safeguard against the risks of AI.

What are the biggest challenges of AI’s application in clinical trial design?

Janneke: I would say the biggest challenge here is to evolve AI-assisted clinical trials into a model that truly benefits patients and strengthens trial rigour. Clinical trials are often very inefficient, costly, and burdensome for patients, so the need for change is clear. However, tensions exist between public and private interests. Most of the AI systems currently used in clinical trials are ultimately developed with commercial interests in mind. Innovations are therefore more focused on how to increase the chances of getting approval for a developed medicine than on how to make trials better for patients, or how to improve the quality, diversity and inclusiveness of trial data and endpoints. Of course, sometimes these goals interlink. For example, higher trial participant satisfaction is likely to result in higher retention rates and hence a higher chance of trial success. But such interests don’t always align. An example is the selection of endpoints: pharmaceutical companies will be happy with the easiest-to-measure option that will be approved by medicines authorities, while patients are likely to opt for the most clinically meaningful endpoints. Ultimately, clinical trials need to be less burdensome for patients, have high transparency, objectivity and scientific rigour, and use clinically meaningful endpoints. If AI technologies can help achieve this, great. But if they are merely developed for financial and efficiency gains on the side of pharmaceutical companies, they do not represent true progress for clinical trials. Right now, patients are largely left out of the discussion on AI innovations for clinical trials. Investments should be directed at rethinking the clinical trial model to make it more reliable, transparent, inclusive and patient-centred, and at exploring the added value of AI technologies in this context.

Would you share some of the things you learned so far?

Janneke: Media headlines about AI innovations in healthcare should be taken with a grain of salt. Often, when you read the studies, the results are underwhelming, the clinical utility unclear, and the snags abundant. Another thing that surprised me is that the quality of research on new AI models in healthcare is often substandard. A Nature study published last year, for example, reviewed 62 studies of models developed to detect COVID-19 from chest radiographs or CT images and found that none of the models were of clinical use due to methodological flaws or underlying biases. Nevertheless, AI in healthcare is a reality today, and one that we should carefully monitor and regulate to ensure the health and wellbeing of individuals is prioritised.

 

 
