Jane Barrett, Reuters’ global editor for media news strategy, was speaking at the London AI Summit

Reuters is using several AI tools in its newsrooms, global editor for media news strategy Jane Barrett has revealed.

The company has been using “some form” of artificial intelligence for the past decade, with economic data coverage an area that has been “pretty automated” for a significant time. However, Barrett noted that the latest changes are going to have an “exponential” effect.

Speaking at the London AI Summit on 12 June, Barrett said: “We rolled out last year a set of AI features for [video] editing, which is really cumbersome; it takes a lot of time to wrangle it into decent shape. So, we launched a tool called Avista, which has three main AI tools in it.

“One is transcription. Let’s say the European elections this weekend. You have so many people on podiums speaking around Europe, and Avista transcribed every single one of those things, and then automatically translated them into English or Spanish. So, for an English producer or a Spanish producer you’ve automatically got everything that was said translated into your own language, and entirely time coded. So you could find the moment [French president Emmanuel Macron] calls the election. You’ve got your sound bite straight away.”

In addition, she said: “Every time the shot changes, it will tell you – which is a boring piece of work when you’re doing it manually.

“And the third thing is facial recognition technology. Only on well-known people: you can start spotting people in a crowd, or when someone starts walking up onto the podium, or whatever it is.”

AI is also causing new problems for journalists, with deepfakes a particular worry. Barrett said: “We were very focused on deepfakes in videos at the beginning of the year, but actually what we found is it was audio we needed to worry about much, much more. We’ve seen it in several elections already, in the US election, Eastern Europe – there was one in Hungary, where there were fake audio tapes going around of: ‘Here is the prime minister talking to a journalist about this marginalised community and how they’re going to be manipulated.’

“If you see something, there’s something in that amazing neural network in your brain that says that’s a little bit off, but we’re so used to listening to things as we walk about that are a little distorted.

“First of all, we will put things through any of these tools that we’re trying out that might show us a level of manipulation.

“We will then get in touch with the people whose voice it is, or whose picture it is, or who are being slandered in the piece, and ask them, ‘This is going round, is it you?’ … which is the type of thing ChatGPT can’t do, yet.”

Going forward, Barrett believes consumption of news could also change, giving an example of AI-curated news for individual consumers. For those working in the industry, she said: “We are all going to have to work out, as teams and individually, ‘what can I offload onto my AI assistant, my AI program, and how can I add on that unique human layer?’”

She added: “There’s going to be new roles – there’s now somebody constantly looking through all the alerts, and marking the ones that were inaccurate or irrelevant so that can feed back and train the model.”