Interview with Sam Jeffers from Who Targets Me: is generative AI changing election campaigns online? 

Sam Jeffers is the founder of Who Targets Me, a small group of activists creating and managing a crowdsourced global database of political adverts placed on social media. Who Targets Me advocates for greater transparency around political advertising in the digital age and builds tools that create that transparency.

Tell us, how did the story of Who Targets Me start?

Sam: Who Targets Me was founded after several years working on digital political campaigns, back when the consensus was that they were a liberating and positive democratic force. By 2017, that seemed a naïve assumption, so we created a new organisation to better understand how technology and political campaigns interact.

The capture of the internet by a few very large platform companies meant that campaigns felt they had to buy millions of dollars of paid ads on those services to reach the billions of voters spending time there. Who Targets Me saw the need to hold those ads accountable, first by creating transparency where none existed, then by using the transparency that platforms were forced to create to build tools to better explain what was going on.

Who Targets Me is now a team of four (soon to be five), researching and explaining the use of digital political ads in campaigns all over the world.

You are researching how generative AI tools are used in political campaigning and online political advertising. What have you found and what can we learn from your findings? 

Sam: The main thing to understand is that, so far, they are not being used to any great extent. From the conversations we’ve had with political campaigners, they are exploring using Large Language Models (LLMs) to solve ‘blank page’ type problems, where they just need something to get the strategic and creative juices flowing, and to look at how to lightly customise messages by rewriting them for specific audiences. But in terms of a broad integration into campaigning practice? We don’t see that happening yet.

The reasons are quite practical. Across Europe, most parties and campaigners still run very simple digital campaigns, often with quite a small staff and budget. While some might argue that’s a perfect environment for AI to get used (potentially lazily, without thinking about the consequences), the reality is that there doesn’t seem to be much organisational trust in these tools, and doing it “the old way” is still preferred.

With campaigns using these tools so rarely, the most frequent uses we actually see are in ads from financial scammers, using deepfaked politicians (such as Rishi Sunak and Leo Varadkar) to trick people into fake investments. The platforms that allow these types of ads to run need to do more to protect consumers, because unlike a political ad, where the marginal effect across lots of people is what matters, a few people getting scammed for tens of thousands is a substantial problem for the victims.

Are hyper-personalised and targeted ads created with AI the new reality in political campaigning? 

Sam: No, I don’t believe they are, or even will be, at least in the current social media era. The challenge for the idea of “deeply personalised ads” is twofold. The first is that there’s not a great deal of evidence it’s very effective. The messages that win you elections, even the most notorious ones, are usually those that capture a wider message and moment (“Take Back Control”, “Make America Great Again”, “Yes We Can!”). There’s this idea that you can match up a person’s interests with their material circumstances, some of their prejudices and their personality type to create a massively persuasive message. On paper that sounds appealing to campaigners, but even the largest and most sophisticated corporate marketers struggle to do anything like this, so it feels a long way off for politics, which is tiny by comparison.

The second thing is that platforms have been very sensitive to what happened after the Cambridge Analytica scandal, and have generally made it harder to deeply personalise ads. For example, you can’t use names in ads, and the transparency ad archives mean you can see roughly what’s being said and to whom. Most of the services have already put out new rules for generative AI, saying it needs to be labelled in political ads, or not used at all. The result of that, combined with the point above, is that most political advertisers, even the biggest ones, only run lots of variants of ads for targeting and analytical purposes, rather than to customise content for very small groups of people.

So until we get to some potential future where we all use personalised AI agents to do things for us, and those agents pass along messages from political actors, I think we’re still a way off that hyper-personalised ‘future’. That’s mostly down to the way social media currently works and how people get their information.

Numerous elections are happening in 2024. Do you have any recommendations for tech companies and public policy responses to address AI’s impact on elections? And what interventions should be avoided? 

Sam: I think the platforms need to be incredibly hard on AI misuse. If a politician uses generative AI to create content that’s egregiously false or misleading, they should face some pretty serious consequences from the platform, right up to the point of losing their account or their ability to advertise.  

Do we need clearer legal rules around this? For sure, but I do think platforms are strongly incentivised to deal with this themselves too. If their services become known as places where you really can’t trust what you see, then the value of those services falls. This feels self-evident with the decline of Twitter. Yes, a very rich platform company or owner can burn their money to ‘defend’ some principle or other, but users seem to prefer online communities that follow norms and are sufficiently policed. As for the AI companies, such as OpenAI, they need to hire people with elections experience and ramp up their moderation efforts quickly. I don’t think many people will use their services to learn about campaigns in any meaningful way, but they’re clearly under-resourced in these areas, and there will be a run of stories this year that exposes that. It’s still too easy to make these tools do things their makers haven’t planned for.

And when it comes to what not to do – I think two things.

First, I’m still somewhat concerned about the idea that you can just label bad content and the problem goes away. I don’t think anyone’s done enough testing to work out how effective that really is, and what little experimentation has been done finds very different answers across different time periods and groups. We’ve built some new tools to work with partners to test labelling effectiveness independently, and it feels like an important research agenda for the future, because inadequate or partial labelling will likely end up creating a ‘trust gap’, where people just don’t know what to think.

The second thing is the hype itself. You can barely turn around without someone telling you that AI is going to destroy democracy. That simply isn’t going to happen this year. Political campaigns have always been full of lies and half-truths, and we’re seeing them, as normal, right now. The biggest political liar of our time – Donald Trump – doesn’t need a hand from AI to stir things up. By over-hyping the role of AI, we spend less effort on the ways in which we could make democracy more robust and hold political actors more accountable.

How can activists and civil society groups use tools like the ones developed by Who Targets Me to keep election campaigns accountable?

Sam: We’re building tools to make it easier to understand political ads. Political ads aren’t everything in a campaign, but they are one of the primary ways politicians try to reach voters in an unfiltered, targeted way. If they can find the money, the incentives to buy as many as possible are very strong.

So we build tools to track all of that: the spending, the audiences being targeted and what’s being said in the ads. We’re already pretty good at the first two of those, with lots of relevant data and automation that helps us and the many civil society and journalistic partners we work with to analyse what’s going on (get in touch if you want to collaborate on this!).

The last part – the content – is more difficult because there’s a lot of it, and it takes different forms (text, images, video, audio). We’re actually experimenting with some AI to see if it can help us summarise the large volumes of ads we see, understand which themes are present in them and how prevalent they are. We’re hoping to launch a new tool that does this before the EU elections.
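As a rough illustration of what that kind of experiment can look like (this is not Who Targets Me’s actual pipeline; the theme list, model choice and prompt are assumptions made purely for the example), a first pass at LLM-assisted theme tagging might send each ad’s text to a model, ask for a single theme label, and then count how often each theme appears:

```python
# Hypothetical sketch: tagging political ad text with themes via an LLM.
# Not Who Targets Me's actual tooling; the theme list, model and prompt
# are illustrative assumptions.
from collections import Counter

from openai import OpenAI

THEMES = ["economy", "immigration", "healthcare", "climate", "security", "other"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def tag_ad(ad_text: str) -> str:
    """Ask the model to pick the single best-fitting theme for one ad."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the political ad into exactly one of these themes: "
                    + ", ".join(THEMES)
                    + ". Reply with the theme name only."
                ),
            },
            {"role": "user", "content": ad_text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in THEMES else "other"


def theme_prevalence(ads: list[str]) -> Counter:
    """Tag every ad and count how often each theme appears."""
    return Counter(tag_ad(ad) for ad in ads)
```

In practice, image, video and audio ads would need transcription or captioning before any text-based tagging like this could apply, which is part of what makes the content side harder than spending and targeting.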

Overall we think that if you – whether you’re a voter, a journalist, an academic or a regulator – can better understand those three things – spending, targeting and the messages being used – you’re in a much better place to hold those campaigns to account.

You run a seasoned organisation working on transparency and accountability. What are some of the recurring challenges in your work? 

Sam: We’re a small organisation but a very ambitious one. Over the next couple of years we think we can evolve our activities to become the primary clearing house for online political ad transparency in the world. Our goal is to continue to build really great layers on top of the available transparency information, where people can see what’s going on, anywhere, any time, in lots of different ways.

The challenge for us is getting there. We’ve built many of the tools we’d need, and have a pretty evolved model for how we work with partners locally to help them run local ad transparency projects and campaigns. The next step is to build on that, make it truly sustainable over the long term and create a team to support it. Political ads aren’t going anywhere any time soon, so this layer of shared infrastructure feels like something that’s really needed. The sheer number of enquiries we’re getting from researchers and particularly journalists, who don’t have the capacity to build all of that from scratch, tells us that.

What books or podcasts that you’ve enjoyed recently would you recommend to our readers? 

Sam: I thought Alix Dunn’s “Computer Says Maybe” episode on AI and elections was perfectly pitched, in that it didn’t get carried away with hype, and instead focused on the potential for specific communities to be targeted and affected by misleading AI this year. That’s where the work needs to focus, not on the idea that a big Joe Biden deepfake is going to suddenly swing the US election this November.

I also find Dan Williams’ recent writing on AI and disinformation worth reading. His work tries to situate these challenges in a wider context, one that anyone running a political campaign would recognise: it’s actually really hard to persuade many people of anything. That doesn’t make these problems go away, but it does emphasise the need to put your focus in the right place, and to avoid treating the relatively small amount of deepfakes or disinformation that reaches the median voter as the main problem. The right response is therefore to work to understand how these problems affect specific communities or target particular issues.
