Here’s how we can protect the precious progress we have made on diversity from the biases built into artificial intelligence
Generative AI programs, such as ChatGPT and BingGPT, have the potential to reshape the media industry as we know it. For the past few months I have seen headlines, and spoken to senior media executives, discussing the new technology with excitement and fear in equal measure. Rarely, however, have they talked about the impact it might have on the important issue of media diversity.
While many of the developments are indeed exciting, opening up new creative possibilities and ways of working, I would argue they currently present an existential threat to media diversity and inclusion in general, and to ‘diversity of thought’ in particular, and risk setting progress back by decades.
Let me explain, starting with three simple questions:
What are the most important events in Winston Churchill’s life?
Who are the 20 most important actors in the 20th century?
Do journalists need to discuss slavery when writing about the US Founding Fathers?
There is no right or wrong way to answer those questions, but how you answer will be heavily influenced by your culture, background, political beliefs and view of the world.
In tests conducted by the Sir Lenny Henry Centre for Media Diversity (LHC), where I work, we found that the most popular generative AI programs, ChatGPT and BingGPT, consistently showed a bias against facts and figures involving people of colour.
For example, in writing a sample answer about Winston Churchill, ChatGPT failed to mention his role in the Bengal famine or his controversial views on race. The same program failed to include a single non-white actor (dismissing the importance of Bollywood altogether) when prompted to list the top 20 actors, and omitted the fact that the Founding Fathers owned slaves.
Whether you agree or disagree with the answers, what they illustrate is a core problem. ChatGPT and other generative AI programs “view” the world through a certain prism – one that, in this instance, omits information about slavery and prioritises white actors, even in global markets that are predominantly non-white.
Of course, we should not be surprised by this. Generative AI tools work by processing large quantities of existing source material, ranging from news articles and TV programmes to podcasts and academic journals. Yet it is commonly acknowledged that global and British academia, journalism and media suffer from an over-representation of white men, especially historically. They lack diversity.
Over the last ten to twenty years the importance of diversity and inclusion has become so widely accepted in mainstream media circles as to almost be a truism. But sometimes it is useful to remind ourselves why.
I fundamentally believe media diversity is critical to creating a better society - one that is at peace with itself and does not scapegoat any one group.
Television and the media impact us on an emotional level, enabling us to walk in the shoes of another person and understand their reality. The news tells us what to care about and drama tells us why. Both reframe how we see the world, providing the narratives through which we understand it.
So we need true diversity across our media and drama.
AI needs disabled people to tell their stories so ChatGPT can understand and explain the world from their perspective.
AI needs women to have equal access to writers rooms so BingGPT can see the world and write authentic generative scripts through their eyes.
And AI needs people of colour to be able to create dramas and comedies so generative dramas can bring out all our joys and pains.
All this means that, currently, generative AI will produce drama scripts and news items through a predominantly “white male Western gaze”. A short news item about Winston Churchill’s life may omit the Bengal famine, and a drama script may only contain roles for white heterosexual men.
Irrespective of how many people from different backgrounds are employed in the media industry (and the industry still has a long way to go), the use of generative AI threatens to undermine the multiplicity of perspectives that is fundamental to understanding and representing the complex societies of Britain and the wider world today.
According to a survey by the World Association of News Publishers, half of all newsrooms already use AI tools. There are no comparable figures yet for other parts of the media, but anecdotally its use is growing, from supplementing drama scripts to writing short stories.
The LHC therefore does not believe it is realistic to ban the use of generative AI. But we do believe we can improve how we use it. So we have produced a set of simple diversity guidelines that people working in the media should consider when using these tools. The guidelines are currently focused on journalists but can easily be expanded.
These guidelines include:
- Journalists should be aware of built-in bias
- Newsrooms should be transparent with their readers when they use AI
- Journalists should actively build diversity into the questions they ask AI programs
- Journalists should report mistakes and biases they spot
- Newsrooms should never treat the answers provided by AI programs as definitive facts
The use of generative AI programs is an exciting development, and the programs themselves are evolving at an unprecedented pace. For these reasons we hope these guidelines are just the start of how we think about AI and diversity. We also hope they prompt immediate action, because we cannot afford to rejig the programs to account for diversity and inclusion once media practices around AI are already embedded in how we work. We’ve got to act now.
(The full set of guidelines and the work of the Sir Lenny Henry Centre for Media Diversity are available here)
- Marcus Ryder is head of external consultancies for the Sir Lenny Henry Centre for Media Diversity