Matt Stagg explains why sports broadcasters are particularly susceptible to deepfakes and how to avoid getting caught out

In 2023, a deepfake video featuring an AI-generated voice of Tom Cruise began circulating online.

Styled like a Netflix documentary – but completely fake – it criticised the International Olympic Committee and formed part of a coordinated disinformation campaign designed to undermine trust in the Paris 2024 Olympics.

By mid-2024, Microsoft’s Threat Analysis Center had attributed the video to Storm-1679, a Russia-affiliated influence group running a broader operation to manipulate public perception of the Games.

This wasn’t parody. It was synthetic media engineered to deceive, destabilise, and disrupt one of sport’s most prestigious global events.

If it can happen to the Olympics, it can happen anywhere in sport.

Deepfakes are synthetic media created by artificial intelligence to make people appear to say or do things they never did.

Once a quirky internet trend, they’ve evolved into tools capable of manufacturing scandal, distorting reality, and undermining public trust.

Sport is particularly vulnerable. It’s emotional, fast-moving, and rooted in trust – between athletes and fans, broadcasters and audiences, brands and institutions.

What makes deepfakes uniquely dangerous in sport is their ability to damage reputations and credibility in real time: a fake video of a footballer admitting to using performance-enhancing drugs, a fabricated post-match clip of a manager insulting players, or a manufactured press conference announcing a transfer or retirement.

Even when debunked, the reputational impact lingers. In the social media era, a fake can outrun the truth with ease.

For broadcasters, the stakes are enormous. In a landscape where speed matters more than ever, the temptation to publish fast – without verification – is constant. But a single error, like airing a convincing deepfake, can seriously damage credibility and public trust.

The risks aren’t just reputational. Broadcasters who accidentally share manipulated content could face legal action, commercial backlash, and a breakdown in trust with governing bodies, rights holders, and audiences.

Thankfully, the same AI technology that creates deepfakes is also being used to fight them. Detection tools are improving, using pattern recognition, lip-sync analysis, and forensic video inspection to flag suspicious content.

But this is a technological arms race. As detection tools evolve, so do the fakes. No solution is perfect, and editorial judgement still plays a critical role.
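
How might those checks look in practice? The sketch below illustrates just one crude forensic signal – frame-to-frame sharpness of the face region, which can fluctuate when footage has been synthesised frame by frame. It uses the open-source OpenCV library; the file name and threshold are hypothetical, and a real detector would combine many signals (lip-sync analysis, frequency-domain artifacts, provenance data) rather than rely on any single heuristic.

```python
# Crude illustration of ONE forensic signal: face-region sharpness that
# fluctuates wildly frame to frame can hint at per-frame synthesis artifacts.
# This is NOT a production deepfake detector - real tools combine many signals.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_sharpness_scores(video_path: str, sample_every: int = 10) -> list[float]:
    """Sample frames and measure Laplacian variance (sharpness) of the face crop."""
    scores = []
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces[:1]:  # first detected face only
                crop = gray[y:y + h, x:x + w]
                scores.append(cv2.Laplacian(crop, cv2.CV_64F).var())
        frame_idx += 1
    cap.release()
    return scores

def looks_suspicious(scores: list[float], threshold: float = 0.6) -> bool:
    """Flag clips whose face sharpness varies wildly relative to its mean."""
    if len(scores) < 5:
        return False  # not enough evidence either way
    variation = np.std(scores) / (np.mean(scores) + 1e-9)
    return variation > threshold  # threshold is illustrative, not calibrated

if __name__ == "__main__":
    scores = face_sharpness_scores("incoming_clip.mp4")  # hypothetical file
    print("Flag for human review:", looks_suspicious(scores))
```

Even then, a flag should trigger human review, not automatic rejection. Heuristics like this will produce false positives on legitimate footage, which is exactly why editorial judgement stays in the loop.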

How to get ahead of the threat:

  • Establish rigorous verification workflows for footage from non-trusted or user-generated sources (a minimal intake sketch follows this list).
  • Use AI-powered tools as part of your editorial checks, especially for video that seems suspiciously well-timed or inflammatory.
  • Train producers, editors, and journalists to understand how deepfakes are made and how to spot red flags.
  • Be transparent with audiences about how you vet footage. Trust grows when viewers know you’re actively protecting them.
  • Collaborate with other broadcasters, rights holders and tech firms to establish shared response protocols and early warning systems.
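
As a concrete starting point for the first item above, here is a minimal sketch of an intake step for user-generated footage: fingerprint the exact file received and pull its container metadata into a verification log. It assumes FFmpeg’s ffprobe is installed; the file name and the review rule are illustrative, not a complete workflow.

```python
# Minimal sketch of a verification-workflow intake step: before a clip from
# an untrusted source goes anywhere near air, record an immutable fingerprint
# and capture container metadata for the verification log.
# Assumes FFmpeg's ffprobe is on the PATH; file names are hypothetical.
import hashlib
import json
import subprocess

def sha256_of(path: str) -> str:
    """Content hash, so the exact bytes that were reviewed can be proven later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def probe_metadata(path: str) -> dict:
    """Container/stream metadata via ffprobe: encoder tags, creation time, codecs."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def intake(path: str) -> dict:
    meta = probe_metadata(path)
    tags = meta.get("format", {}).get("tags", {})
    record = {
        "file": path,
        "sha256": sha256_of(path),
        "encoder": tags.get("encoder", "unknown"),
        "creation_time": tags.get("creation_time", "missing"),
    }
    # Missing or generic metadata is not proof of a fake, but it is a
    # reason to escalate to a human verifier before broadcast.
    record["needs_human_review"] = record["creation_time"] == "missing"
    return record

if __name__ == "__main__":
    print(json.dumps(intake("ugc_submission.mp4"), indent=2))  # hypothetical clip
```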

The Tom Cruise IOC deepfake wasn’t a PR stunt; it was a geopolitical information attack aimed squarely at global sport. It showed how easily synthetic content can be weaponised, and how unprepared many parts of the industry still are.

If broadcasters don’t act now, the next deepfake crisis could involve an athlete, a manager, a referee – or even the broadcaster itself.

The use of AI-generated misinformation will increase before regulation catches up. Early steps, like the EU’s AI Act, are promising, but the broadcast industry can’t wait for government frameworks. It must lead the charge in setting its own standards.

Future sports fans will consume content across more platforms, from more sources, and in more formats than ever before. The challenge for broadcasters is to protect trust across all of them.

Deepfakes aren’t just a technical challenge; they’re a credibility crisis in waiting. For broadcasters, rights holders and anyone involved in the storytelling of sport, there’s a clear choice – be proactive, or be caught off guard.

Matt Stagg

Matt Stagg is a sport, media and entertainment technologist