GlobalData’s Xander Hartley and Costanza Barrai explain what the TV industry should do to reduce adverse reactions to the use of genAI

[Image: the 2024 protest against The Last Screenwriter]

High-profile backlashes against the use of AI in film and TV suggest the industry underestimates audiences' negative attitudes toward the technology. Film and TV companies must either reduce their use of AI or be transparent about it, or risk reputational damage.

The use of generative AI extends across the film and TV value chain, from early ideation and storyboarding to scriptwriting, visual effects, and marketing.

While AI can benefit film and TV companies, its adoption has sparked controversy.

The use of AI to replace human workers was a highly contentious issue during the 2023 Writers Guild of America and SAG-AFTRA strikes. And among audiences, generative AI has faced significant backlash from those concerned about the authenticity and creative integrity of AI-assisted media.

The 2024 protest against The Last Screenwriter (pictured above)—the first film entirely written by generative AI—led to the cancellation of its premiere at the Prince Charles Cinema in London.

In cases where generative AI was used less extensively, companies still faced adverse public reactions and media coverage.

Independent film studio A24 received considerable criticism on social media for using AI to create promotional posters for its film Civil War (2024).

Similarly, the 2023 indie horror film Late Night with the Devil received negative media coverage for using AI-generated interstitials—images inserted between scenes.

In both cases, the studios did not disclose the use of AI beforehand; when it was discovered, they faced accusations of deception and a loss of consumer trust in the creative process.

As consumers become more conscious of the origin of the media they consume, companies must consider audience perceptions, weighing the cost benefits of AI against the amount of AI use audiences will accept.

A 2024 Variety Intelligence survey found that one-third of respondents would be less interested in a film or TV show if they knew it was written using generative AI, while only two in 10 consumers expressed interest in engaging with AI-assisted media.

Respondents were least comfortable with digital replicas of human actors but significantly more likely to accept AI in sound effects and illustrations in animation.

This indicates that the market for AI-generated content is limited, and the immediate cost benefits may not justify using generative AI for video or images.

However, audience preferences are not static – there are signs that evolving consumer awareness of AI will encourage greater acceptance.

A 2024 FTI Delta survey showed that people familiar with generative AI tools have a more positive view of AI-generated content.

Therefore, film and TV companies should not abandon AI wholesale in response to backlash; instead, they should integrate it into their value chain gradually while limiting their use of generative AI where audience resistance is strongest.

Companies must also be transparent about their use of AI-generated media.

Those that fail to disclose their use of AI risk further reputational damage.

Transparency might take the form of on-screen disclaimers or an industry-standard rating system that informs consumers about the extent of AI use.

Such practices would reduce media speculation and mitigate the potential controversy surrounding AI.

In the meantime, using AI for repetitive back-end tasks, such as transcribing interviews, can offer significant cost savings with minimal risk of backlash, as these uses are non-generative and do not affect creative roles.

Xander Hartley is associate analyst, GlobalData Strategic Intelligence, and Costanza Barrai is senior analyst at GlobalData.