Interview with Martha Dark, Foxglove


Foxglove is a UK-based non-profit that works to build a world where the use of technology is fair for everyone. When the powerful misuse technology to oppress or exclude, Foxglove investigates, litigates and campaigns to fix it. Foxglove is part of the European AI Fund’s 2021 Open Call cohort.

Artificial Intelligence systems have already changed the exercise of power across Europe, yet the public are rarely consulted on their use. Foxglove aims to build a popular counter-current – to undermine the myth that “AI” is a specialist field in which only programmers and data scientists have a say. Foxglove brings legal cases and tells stories that contribute to building a vibrant democratic response to AI and automated decision making. The organisation is also working to empower platform workers to speak up about the automated systems that control their working conditions, and to bring (and communicate) legal challenges.

We talked to Martha Dark, Foxglove’s Director and Co-founder, to find out more about their work.

How are you working to promote transparency in how governments and big corporations use new technologies? 

Martha: Technologies in both the public and private sector are frequently used to make hugely consequential decisions about people’s lives: who gets a visa, which university a student gets into, who gets hired and who gets fired. Yet many of these systems are still developed behind closed doors, without proper transparency or public consultation.

At Foxglove we use litigation and public campaigning to make the use of technology fair in both the public and private sector. Many of our early successful UK judicial review cases challenging unfair or discriminatory government algorithms started out as transparency challenges. Take, for example, the Home Office visa streaming algorithm, a system that had been in use since 2015. It made unfair decisions about who could visit this country based in part on the applicant’s nationality – but nobody even knew the algorithm existed until 2019. In a more recent example, we’ve been supporting the Greater Manchester Coalition of Disabled People to challenge a government algorithm which targets disabled people for benefits fraud investigations in a disproportionate, discriminatory way. Yet the Department for Work and Pensions has repeatedly refused to provide more information about how this system works. Our investigations, litigation and campaigns lift the lid on these secretive systems and challenge them when they are unfair.

We can’t challenge what we can’t see, and it is vital that these systems are developed in the open, with the communities they impact engaged and meaningfully consulted on their use, to ensure community trust in this type of decision making. We’ve seen some positive progress recently: the UK Cabinet Office has developed an algorithmic transparency standard, based on successful algorithmic transparency models already implemented elsewhere in Europe. We’ve been working with local authorities across the UK to encourage piloting of the standard. It isn’t yet mandatory, but it is a positive step and I welcome its adoption across government.

The power imbalance between big tech and ordinary citizens is huge, and creating systemic change will take all of us. Change of that scale can only be accomplished by a large movement. The law has a role to play in that – but so does political action designed to change the mood of our public square into one that supports breaking up the tech giants. It also requires a popular mass movement that rejects Big Tech’s overreach into our personal lives and raises public awareness of their dodgy business practices and appalling treatment of their workers, especially at Amazon and Facebook.

What are the biggest obstacles you encounter when challenging abuses? 

Martha: The challenges of being a small and new(ish) non-profit standing up to some of the biggest tech giants on the planet are endless. For example, big tech companies like Facebook, Google, Amazon and Uber have spent years perfecting cultures of fear and secrecy to slow workers’ efforts to unionise and speak out against harmful working conditions. The culture of secrecy and fear that Facebook has created presents particular challenges to our work supporting its workers, who are essentially gagged by overly restrictive NDAs that try to prevent them from even talking to their families about their work.

What are the main risks and opportunities of AI and tech that you observe from your work?

Martha: Anyone can see the benefits of the incredible advances in technology over the last 20 years, along with their potential to transform people’s lives for the better. The problem I see is the unfair distribution of those benefits, and the potential for powerful parties – whether individuals, governments or big tech platforms – to abuse the technology to amass incredible wealth and power at the expense of the rest of us.

We’ve been thinking a lot about the risks and opportunities of technology in the context of healthcare in the UK. Technology and data clearly had, and still have, an incredibly important role to play in the pandemic response. But at the start of the pandemic we saw major tech firms, from Palantir to Amazon, treating the pandemic as an opportunity to embed themselves in the UK’s National Health Service. The NHS holds the largest set of machine-readable health data in the world, making it enormously attractive to tech giants. We feared this would deepen their power and might later entrench unfair algorithmic systems (by, for example, facilitating problematic cross-government data sharing). We launched a campaign, #NoPalantirinourNHS, in an effort to ensure that NHS data remains a public asset used for public good. We also built a coalition of pensioners, journalists, doctors and politicians, and together we successfully paused the pooling of UK national health data in the NHS data grab. But the future is still murky – so we’re watching carefully to make sure that where technology plays a temporary role in pandemic systems, it doesn’t quietly become permanent without proper procurement processes and safeguards in place.

What is the achievement that you’re most proud of?

Martha: Just this week, alongside the amazing Kenyan lawyer Mercy Mutemi and her firm Nzili and Sumbi Advocates, we supported former Facebook content moderator Daniel Motaung in launching his case against Facebook and its outsourcing company Sama. Through the case, Daniel aims to end the exploitation and union-busting of content moderators in Kenya.

We’ve been fighting to improve the dire workplace conditions of Facebook content moderators since 2019. These are the people who wade through hours on end of toxic content so that it doesn’t reach our screens. They receive totally insufficient mental health care, and many of them develop PTSD from the work. This is the first case of its kind, and Daniel is incredibly brave to stand up to Facebook; we are proud to be supporting him. When Cori, Rosa and I founded Foxglove in 2019, we all felt strongly that supporting tech workers and helping to build tech worker power offered a major strategic check on tech platforms’ power. So filing the case this week is a big moment for us. You can read more about it here.
