Self-regulation from within the sector is key, says Papercup’s Jesse Shemen


Despite artificial intelligence’s potential to tackle some of humanity’s biggest challenges, many people both inside and outside the tech industry are fearful of the risks this transformative technology brings.

Many of these fears coalesce around the potential for AI to deceive.

AI has, inevitably, been used to create fake songs imitating stars such as Drake and The Weeknd, while most internet users have, at one point or another, found themselves confronted by deepfake content mimicking figures ranging from Oscar-winner Tom Hanks to YouTube personality Mr Beast.

Where Hollywood leads, the rest of the world often follows and, perhaps inevitably, the creative industries have become the crucible in which fears about AI’s potential harms are most visibly playing out.

It’s not just big tech that should set the tone. Startups, too, are being held accountable for their actions (or inaction), as is clear from the ongoing legal action brought by voice actors against LOVO.

Until now, much of the discussion has focused on the extent to which celebrities have control over their personal image and voice data or whether using specific material in AI models constitutes copyright infringement.

The news that OpenAI removed one of its ChatGPT voices for sounding like Scarlett Johansson, who voiced the AI character Samantha in Spike Jonze’s 2013 film ‘Her’, has been one especially prominent example of late.

But, in 2024’s bumper election year, concerns are now spreading beyond the threat AI poses to the creative industries and toward its impact on the democratic process.

Earlier this year, the widely reported Biden robocall scandal saw an estimated 25,000 fake phone calls imitating the President’s voice seek to discourage voters in New Hampshire from participating in the state’s January primaries. Just this month there were reports of Labour’s Wes Streeting being targeted by a “deepfake smear network” on X.

It’s evident that current laws protecting the public from deepfakes are not yet fit for purpose. But we cannot wait for regulation to provide the answers: regulators the world over are yet to show us they can keep up with the pace of technological change we’re witnessing.

This is particularly pressing given the upcoming UK election and the recent news that young voters in key electoral battlegrounds are being recommended fake, AI-generated videos featuring party leaders, alongside other misinformation.

Whether we like it or not, self-regulation from within the sector is key. And, whether we admit it or not, this is already more than possible.

Gated service models, which grant access to data and advanced AI capabilities only to a closed set of vetted clients, are one effective way of preventing content from being misused. Embedding unique watermarks can also help unsuspecting users distinguish genuine content from that generated by AI.

But technological answers to the problem can only do so much of the heavy lifting. AI poses profound ethical and philosophical questions and tech firms need to start thinking much more about how they are going to tackle them if the public is to be reassured that AI can overwhelmingly be a force for good.

At Papercup, the AI dubbing company I run, we have developed an ethical pledge that embeds the principles of transparency and consent in our work. The first commitment of its kind in the UK, it not only commits us to using voice actors’ data ethically but also promises fair payment to the voice artists with whom we work.

It will not staunch the threat of deepfakes, but it does throw down the gauntlet to other tech firms operating in a sector now moving too fast for the law, for technology, for politicians, or for Hollywood to keep up with.

As Samantha observes in Her: “It feels like I’m changing faster now, and it’s a little, uh…unsettling.”


Jesse Shemen is CEO of Papercup