Kalel Productions produced Can AI Steal Your Vote? – Dispatches for Channel 4, attempting to sway voters with content created by AI tools
Channel 4’s Dispatches has taken a look at AI content and its effect on voters ahead of the general election.
The show was executive produced by Nick Parnes, CEO of production company Kalel Productions, and Martin Turner, and presented by Cathy Newman. AI content was created to try to sway a group of 25 voters in 12 households, who weren’t told whether the content was real or not. By the end of the two-week experiment, 23 of the 25 voters said they would vote the way the deepfakes had been nudging them.
Participants being pushed towards voting Labour were shown deep-faked footage of Rishi Sunak announcing new Conservative policies – including a £35 charge for GP appointments and an increase in the minimum property deposit to 20% of the value. They also saw an AI-generated hot mic of Sunak leaving a conference, where he is waylaid by a reporter asking whether he plans to privatise the NHS and sell it to the USA. In the fake hot mic, Sunak is heard in his car saying: “How the hell did they know about the NHS? Jesus Christ. Find out who leaked.”
Participants being pushed towards voting Conservative were shown deep-faked footage of Sir Keir Starmer announcing new Labour policies – including placing asylum seekers with children at the top of the housing list and a new “social care” tax of 5% on those earning over £50,000. They also saw an AI-generated hot mic of Starmer seeming to talk to his aides after a meeting. In the fake hot mic, he says: “I’ll just tell those guys whatever they want to hear, it won’t matter once we’re in power.”
The content was designed to be untrue but still plausible – so plausible, in fact, that one deepfake of Sunak announcing national service for 18-year-olds had to be cut when he announced it as an actual policy.
Turner, who had never done anything like it before, used publicly available AI tools to create most of the content, including ChatGPT, Midjourney, and the AI voice generator ElevenLabs. Efforts made over the past year to make these kinds of tools less able to impersonate well-known figures did little to stop him: “We used ChatGPT a lot to help us write various things. We made some conspiracy videos that were entirely written by ChatGPT. We made our own social media app, an analogue of Twitter or Facebook or Instagram, called Echo, that we gave to all our voters, and so the bulk of the messages that they saw were generated by ChatGPT. We also used image generators like Midjourney to create images of people, including politicians. Halfway through doing that, the leaders in the field started bringing in these guardrails that stop you doing bad things.”
However, he explained that after the guardrails came in, “very quickly, I just switched to an open source way of doing it rather than using a service. Which just meant it wasn’t quite as good, but [was] good enough. That took me half a day. There’s a huge amount of open source out there, the models are there and you run them in the cloud. I’d never done that before working on the show, so I worked it out. I think anyone can do it.”
The “hot mic” content was made in partnership with a specialist AI media production company, Studio Neural, which also created AI content for ITV’s Deep Fake Neighbour Wars with Fifty Fifty Post.
Largely, the participants took the content given to them at face value. More worryingly for news organisations, some of the influence survived even after the deception was revealed. Turner said: “Some people said, ‘I know it’s fake, but the anger is real and it’s going to affect how I vote, even though I know it’s fake.’”
Parnes added: “When you see something you go, ‘I knew it, I knew that’s what that politician was like.’ Even when you say that wasn’t true, [they say], ‘Of course it wasn’t true. Still, I knew it.’”
The production found that while people doubted the veracity of the “hot mic” audio more than the video deepfakes, the audio still had the more powerful effect, because participants believed they could be hearing something they weren’t meant to have access to.
Looking at what broadcasters and social media companies can do, the pair have few answers. Turner said: “I think it’s really hard. It’s not really any different from the misinformation that’s out there today. AI is just a quicker, easier, more convincing way of making it. Things like community notes on Twitter [sic] do make some difference, but I don’t really know how you can stop it. Detection services are notoriously not very reliable, with lots of false positives. Personally, I don’t know how you stop it other than making people more aware of what’s possible, and giving people critical thinking skills and the ability to fact check things.”
Parnes noted: “What we were told was that really the big fear of AI and the electorate is that it will make people lose faith in democracy. That’s the issue. You can see how that will happen, if people are sceptical about every single thing they see from every politician, everything that comes out of their mouth, you can imagine them thinking, ‘oh, this is all chaos. To hell with that.’”
Can AI Steal Your Vote? – Dispatches aired on Channel 4 at 8pm on 27 July, and is now available through the Channel 4 streaming platform.