Benjamin Field (producer of Virtually Parkinson) explains why he’s apprehensive about the UK government’s consultation on copyright and AI


As a UK broadcast producer and director who has engaged deeply with artificial intelligence within our creative industries, I’ve seen first-hand how we stand at a crucial intersection of ethics, innovation, and legal frameworks.

My work – which ranges from pioneering AI-integrated projects such as Gerry Anderson: A Life Uncharted and Virtually Parkinson to regular engagement with industry bodies – has shown me just how vital it is to balance technological possibility with human authorship and cultural integrity.

Recently, I had the opportunity to give evidence to a Select Committee reviewing British film and high-end TV, where I was able to share insights gleaned from both practical experience and ongoing policy discussions.

That engagement with artificial intelligence in the creative sphere has shown me firsthand how cutting-edge innovation must be balanced against respect for human creativity and legal standards.

This is not an abstract issue; it’s one I’ve grappled with in practice, from the projects mentioned above to exploring the boundaries of digital likeness in new formats.

During my Select Committee appearance, I discussed these challenges in light of the ongoing UK government consultation on copyright and AI.

The consultation acknowledges AI’s potential but also highlights complex questions that demand careful consideration.

One of the most pressing issues I raised with the Select Committee is the use of digital likeness—an area still wide open to ethical, legal, and commercial interpretation. The ability to recreate someone’s image or voice through AI can be groundbreaking in terms of storytelling, but it also poses serious questions about consent, legacy, and remuneration.

During the hearing, I illustrated the complexity of this issue: “In the UK, you can do anything. That doesn’t necessarily feel right.”

This comment underscores the urgent need to address the rights of both living and deceased individuals when it comes to their digital representations.

Without clear legislation, producers might find themselves in murky ethical territory—allowed by law to recreate likenesses without explicit consent or compensation.

This is precisely where the government’s consultation can offer clarity, ensuring creators and performers retain control over how AI is used to represent them, their work, or their identity.

My own project, Virtually Parkinson, serves as a tangible example of how to approach digital likeness responsibly. In this podcast, I sought to recreate the iconic voice and presence of the late Sir Michael Parkinson using licensed data and AI tools.

By adhering strictly to ethical and legal frameworks—securing proper permissions, respecting the subject’s legacy, and maintaining full transparency—I aimed to demonstrate that it’s possible to leverage AI’s creative potential without crossing moral lines.

Audience reactions to Virtually Parkinson highlight that honesty, disclosure, and respect for the individual’s legacy can mitigate initial scepticism.

This experience supports the idea that if industry guidelines and certification schemes—like those currently under discussion—were backed by clear legal standards, producers could more confidently utilise AI while upholding trust and integrity.

Industry bodies such as BAFTA, Pact, Equity, and the Writers’ Guild of Great Britain, along with other stakeholders, are open to exploring certification frameworks – akin to the albert scheme in sustainability – that ensure AI tools and datasets are used responsibly.

The voluntary standards I have proposed emphasise human control, consent, and the fair licensing of training data.

However, these measures can only go so far without legislative reinforcement.

We must also remember that audiences expect authenticity, and transparency will be crucial.

Whether it’s through watermarking, disclosures, or industry-wide certification, building viewer trust is paramount.

When I testified before the Committee, I underscored the principle that just because we can do something with AI doesn’t mean we automatically should—ethical considerations must guide our actions.

As I said, “If something can be done, does it mean it should be done? Are we going to harm our colleagues with our actions?”

This reflects the broader sentiment echoed by many in the industry – just because AI makes something possible doesn’t mean it is ethically or artistically desirable.

The UK’s film and TV sectors have long enjoyed global renown for their craft and cultural resonance. By enacting legislation that respects international standards and supports our ethical frameworks, we can retain that prestigious position.

Clear legal guidelines will enable AI-driven innovation that augments, rather than overrides, human creativity. This approach will not only safeguard creators’ livelihoods but also foster an environment where audiences trust the authenticity and integrity of the content they enjoy.


Benjamin Field is a TV producer/director, ethical AI expert and policy writer