At the AI Creative Summit, execs from the BBC, ITN and Channel 4 discussed moving from proofs of concept to integrating the technology into their working practices

GRACE BOSWOOD (director of technology, Channel 4) The Generative AI [GenAI] revolution has focused the mind. We are working in three areas. First, we’re creating policies for how we use GenAI. We believe it is there to work in the service of creators, not to replace them, so we’re looking for optimisations around how we create content, not a complete revolution in how it’s created.

It would be naive to think this won’t be really useful in allowing us to do more with less. It’s a difficult market out there and we have to think about how we can deliver our business metrics more effectively.

Grace Boswood and Danijela Horak

One example is around personalisation and content creation. There is an infinite opportunity to personalise with streaming and social media, but we don’t have an infinite number of people creating clips or good visuals, so it’s about how you can experiment with AI to augment those processes.

Second, we are working out how we can use Channel 4’s unique attributes to bring something different to the market. We’re working with a company called Charismatic AI, as part of a consortium that also includes Aardman and some academics, to research what the future of content creation might be.

Third, we’re looking at how we can lower the barriers to entry and how we can get a diverse talent pool using these technologies. From that, magic may happen. It’s a research project, and I really hope that something will very quickly come off the back of it.

DANIJELA HORAK (head of AI research, BBC R&D) AI and data science have long been the bread and butter of what the BBC does – in our recommendation system, personalisation and data analytics.

When it comes to GenAI, there are initiatives spanning from efficiency workflow tools to ambitious R&D projects. Very early on, we realised that if you want to use commercial tools to, for example, generate text, they are very limited: the style, tone of voice and vocabulary are monolithic. They are not promoting diversity of expression.

“We found there is a distinct difference when you use your own content to adapt these tools to your own use cases”
Danijela Horak, BBC

We thought if we were ever to use something like this, it would have to embody the BBC’s values, style and tone of voice. So we started exploring how easy it is to recognise the output of commercial models in written text, compared with the BBC’s own output.

Then we looked at what you can achieve by training these models on millions of BBC articles. We found there is a distinct difference when you use your own content to adapt these tools to your own use cases.


The BBC has a reporting service where 400 local journalists provide copy to us that would typically be rewritten by our journalists. Due to resource constraints, we can only rewrite about 10% of that content, so we embarked on a project to see if we could create an automated system – a trained model that takes that copy and rewrites it the way a BBC journalist would. It would then be passed to a subeditor, so there is a human in the loop.

We are very happy with the initial results. It’s still undergoing a trial and hopefully it’s going to be deployed in the first quarter of next year.

Detecting deepfakes

We also do a lot of research in deepfake detection. There are many tools out there already, but in the newsroom you can’t suggest tools to a journalist that are, for example, 80% accurate. What happens in the other 20% of cases? How do you build trust? So we’re building our own explainable deepfake detection tools, in collaboration with the University of Oxford, and expect the first results fairly soon.

For us, it’s about explainability. You have to be able to say, ‘The model says this is a fake photo or a video and this is the rationale behind that decision.’ Then a user can assess whether that makes sense or do further investigation to validate the model’s outputs.

JON ROBERTS (director of technology, production & innovation, ITN) Trust is core to our business, so we’re naturally cautious about synthetic content generation. Like everybody else, we’re running proofs of concept, from voiceover to animation and everything in between.

Jon Roberts (right)

We formed cross-company working groups last year to try to surface where the real opportunities, risks and threats are. We’re looking at tools to better help us tag, protect, manage and exploit our archive. There’s also a lot we’re doing with the less sexy stuff – around spreadsheets and data management.

Alongside all of that, we are actively engaged in some of the industry conversations around IP and copyright protection and how that might play into policy thinking. In terms of production tools, one of the first things we built was a chatbot with our HR policy as a sort of companion for life at ITN. It will ultimately grow into a co-pilot for understanding all the systems you need to be able to succeed at ITN.

We’ve also built our own transcription platform – it’s an interesting example of how AI is allowing us to look again at building things ourselves versus off-the-shelf solutions. There’s real value in something that’s highly optimised for the way we work.

This year, through an IBC Accelerator initiative, we also got to work with the BBC and others on a proof of concept that connects a trained AI agent to our running order data and other production data to see whether it can thrive in a really challenging live production environment. The fundamental concept of a trained model with access to our data and more control of our systems feels really interesting.

“I don’t think any of us should underestimate the challenge of going from experiment to full-scale deployment”
Grace Boswood, Channel 4

GB There’s something quite profound there, isn’t there? About how you take this from successful experiment to full production, and how you take cutting-edge AI technology and integrate it into software systems which can be quite old and clunky, and not really fit for purpose.

Some of our scheduling systems and data systems probably haven’t had all the investment they need. The optimist in me says, ‘Well, there’s opportunity for a step change, to take those legacy systems and go three or four steps ahead’. But I don’t think any of us should underestimate the challenge of going from experiment to full-scale deployment.


JR I agree 100%. Moving from interesting proofs of concept that we see at a trade show into things that feel like they’re delivering value – there’s a real gap there. However, the most valuable way we’ve found to reskill as a tech team is to start trying to build these things. You can see how some of the skills in building a fairly simple chatbot are ultimately at the heart of some of the more complex things we’re trying to do.

GB We do have naysayers who don’t want to touch this thing, either because they are scared of it technically or because of the issues around jobs and so on. But we’re clear about saying this is not about replacing, it’s about augmentation and efficiency, so that you can do more of the fun things in your job.