From automating time-consuming tasks to aiding creativity, use of the technology is spreading rapidly

When asked what impact artificial intelligence will have on the TV sector, ChatGPT, the generative chatbot famed for its ability to produce natural responses, says: “The TV industry has already started adopting AI in various ways, and this trend is likely to continue and expand in the future.”

It’s not wrong. The global AI market in media and entertainment is expected to hit $104.4bn (£82bn) by 2030, up from $10.3bn (£8bn) in 2021, according to findings from market research consultancy Straits Research.

The rapid advances in the sophistication of generative AI programmes – such as image creators DALL-E and Midjourney, video creation software Lumen5 and, of course, ChatGPT – over the past couple of years have made them more effective and accessible in all aspects of life, and TV is no exception.

Debates over the opportunities and threats posed by tools that can produce creative output almost indistinguishable from human efforts are bubbling across the industry, but many production companies are already finding uses for AI.

Alex Connock

Alex Connock, senior fellow at Oxford University’s Saïd Business School and head of the creative business department at the National Film and Television School (NFTS), points to the marketing industry, where AI is already ubiquitous – used, for example, to change the colour or setting of a product in an advert, so marketers can tailor versions for different territories without the expense of filming multiple adverts.

“The same is going to be true in TV within months,” says Connock, pointing to the dominance of companies such as Netflix, Amazon and Apple, all of which have been harnessing the power of AI for several years.

While mainstream interest in the technology has exploded in recent months, Connock says the first wave of AI use in the TV industry came between 2007 and 2009, when it was introduced for distribution and recommendation systems – the obvious example being Netflix, which held a $1m contest to create an algorithm that would boost the platform’s recommendation system.

The second wave came in 2017, when AI began to be used in production tools across film and TV. Then in 2020, the third iteration arrived – generative AI, which can be used for ideation and content creation.

Problem solving

Robert Dawes, lead research engineer at BBC R&D, has been thinking about how to apply AI to solve production problems since 2017.

One success has been on Springwatch and Winterwatch, which previously used motion detection to help monitor cameras pointed at the entrance to animals’ burrows, or spots they were known to frequent. This system was often triggered by background plants waving in the breeze.

By 2019, Dawes’ team had developed an in-house AI dashboard, based on supervised machine learning, where labelled datasets are used to train the algorithms to recognise and classify different animals.

The AI dashboard logs feeds from the cameras, noting when animals appear. If the crew need to step away from the monitors showing the feeds, the dashboard will monitor what is happening during their absence. It also allows story developers to offload some of the more time-consuming tasks to free up more time for editorial creativity.

“It’s like an extra pair of eyes that you can leave looking at those cameras when you’re not able to,” says Dawes.
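The dashboard’s core job – watching feeds, discarding false triggers and logging confident sightings – can be sketched in a few lines. This is an illustrative mock-up only, not the BBC’s actual code: the `classify_frame` model, the `Detection` record and all other names are hypothetical, standing in for a trained supervised classifier of the kind described above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str
    timestamp: float
    species: str
    confidence: float

def log_detections(frames, classify_frame, threshold=0.8):
    """Run a trained classifier over camera frames and keep only
    confident animal sightings - the 'extra pair of eyes' role."""
    log = []
    for camera, timestamp, frame in frames:
        species, confidence = classify_frame(frame)
        # Unlike plain motion detection, low-confidence triggers
        # (e.g. plants waving in the breeze) are filtered out here
        if species != "background" and confidence >= threshold:
            log.append(Detection(camera, timestamp, species, confidence))
    return log

def most_frequent_visitors(log):
    """Answer questions like 'which birds visit this feeder most often?'
    without a researcher logging every species by hand."""
    return Counter(d.species for d in log).most_common()
```

The second function is the kind of query Baddams describes below: once sightings are logged automatically, ranking the most frequent visitors to a feeder becomes a one-line count rather than days of manual logging.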

BBC R&D developed an in-house AI dashboard for use on Springwatch

Springwatch researcher Jack Baddams says the dashboard’s ability “to crunch vast volumes of data” has been incredibly useful – for example, in identifying which types of birds are the most frequent visitors to a particular feeder, a task that would previously have needed story developers to spend days logging every species captured.

“We don’t have the manpower to do that sort of stuff manually,” Baddams says.

Now Dawes has moved on to Connock’s third wave, generative AI, exploring the potential uses of Neural Radiance Fields (NeRF). These are neural networks trained on a set of 2D photographs of a scene to build a 3D representation of it – allowing new, synthetic views to be rendered from camera positions that were never actually photographed.

“If I walk around a scene with my phone and take a short video, I can then generate new views where the camera can move around within that area,” says Dawes. “There are potential natural history applications for that, where you could have an animal’s eye view of a scene.”

Indie Particle 6 used ChatGPT to suggest toddler-friendly topics for Look See Wow!

Connock is also seeing increasing interest from indies in using AI in the development process, saying that a number of his NFTS students have been hired by production companies launching AI units.

Eline van der Velden, founder of production company Particle 6, has made ChatGPT part of the fabric of her development team. “I tell every single person that works for us, ‘you have to use AI’,” she says. “Get familiar with it just like you would Excel or Word back in the day, and start making it part of your workflow.”

When working on the indie’s latest project for Sky Kids, an ASMR-inspired programme for toddlers entitled Look See Wow!, her team used ChatGPT to suggest toddler-friendly topics and generate a synopsis and template for each episode.

“Every employee should be using AI because it enhances your creativity”
Eline van der Velden, founder, Particle 6

The Particle 6 team would input phrases such as ‘here’s a synopsis, create five more synopses like this, using these subheadings’. It’s still a learning process, she says – AI requires a lot of steering and care with the prompts to ensure useful output.
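The “steering” Van der Velden describes is largely a matter of assembling a careful prompt: an example, a fixed structure and an explicit ask. A minimal sketch of that templating step, assuming a hypothetical helper (the function name and parameters are illustrative, not Particle 6’s actual workflow):

```python
def build_synopsis_prompt(example_synopsis, subheadings, n=5):
    """Assemble a steering prompt of the kind described above:
    an example synopsis, required subheadings, and an explicit ask."""
    headings = "\n".join(f"- {h}" for h in subheadings)
    return (
        "Here's a synopsis:\n"
        f"{example_synopsis}\n\n"
        f"Create {n} more synopses like this, using these subheadings:\n"
        f"{headings}"
    )
```

The returned string would then be sent to a chat model such as ChatGPT; the point of templating is that the structure stays constant across episodes while only the example and topic change.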

But, she adds: “In the future, it’ll be like everyone’s assistant. Every single employee everywhere should be using it, because it enhances your creativity.”

Eline van der Velden

She is sceptical that AI will pose a serious risk for writers – a concern that has been raised by the Writers Guild of America as part of the Hollywood writers’ strike. Although the dispute mainly revolves around current pay, the union is also hoping to safeguard against the future use of AI to replace scriptwriters, or to circumvent paid development work.

“Writers just need to learn to use it and become even better writers because of it,” Van der Velden says. “It’s just a tool – when ChatGPT spits something out, if you’re a bad writer, you won’t be able to tell that it’s bad writing and you can’t steer it to become good writing. So you still need good writers to ensure the final piece is well written.”

Connock agrees: “The production world has already been dramatically changed by AI and will continue to be, but it will still require human insight to structure things. And it could be that jobs get more interesting, because you’re doing less rote work, such as pulling through hours of rushes trying to find the right bit, and more considered work like trying to figure out what the real story is.”

Job creation

The World Economic Forum’s The Future of Jobs Report 2020 suggests AI will displace 85 million jobs across the globe by 2025 – but will also create 97 million new ones over the same period.

It’s not just in development or the edit suite where AI has the capacity to save time. Virtual and LED volume stages have revolutionised big productions such as The Mandalorian: instead of directors and actors having to simply imagine the setting while filming in front of a green screen, game development engines are used to render vast three-dimensional-seeming environments that can be uploaded to the volume stages.

Cuebric is a generative AI system that aims to shortcut this process: a director can use its underlying image-generation engine, Stable Diffusion, to generate a high-resolution backdrop from any prompt and, 15 minutes later, have actors walking around in front of it.

It’s the brainchild of the generative AI production company Seyhan Lee and its co-founders Pinar Seyhan Demirdag and Gary Lee Keopke.

“This tool allows filmmakers to move so much faster, see more iterations to their visions,” Keopke says. “And it’s in near real time, instead of having to wait like a week, or three days or a month to get an Unreal Engine drawn up – so it’s great for previsualization.”

Hong Kong’s Fight For Freedom used AI to protect the identities of protesters with face doubles

The Massachusetts-based company launched its proof of concept in December last year and plans to bring the tool to mass market this year. But they will have to navigate one of the major barriers to the use of AI in TV – that of copyright.

In order to produce an image, AI needs to be trained using thousands of other images. These may well be copyrighted, and the final image may bear a striking resemblance to that original work.

In February, Getty Images filed a lawsuit in the US against Stability AI, the company behind the Stable Diffusion engine, alleging it used more than 12 million of Getty’s images to create datasets for its image-generating engine without permission.

In the UK as it stands, Connock says, a TV show would be compromised in terms of copyright if it used an AI image generated by an engine that had been trained on datasets that weren’t copyright cleared.

A potential solution could be using ‘closed’ versions of ChatGPT and other platforms, where the user can be sure of the copyrights on all the input data.

Again, this is already happening in the ad industry – WPP, one of the world’s largest advertising groups, has developed its own AI content engine.

“Most decent-sized TV companies should now be thinking about having an in-house generative model,” Connock says.

In-house models could also help organisations take steps towards mitigating AI biases. AI learns from the material it is fed – if that contains biases around gender, race or disability, for example, so will the tool’s output.

Tami Hoffman

ITN director of news distribution and commercial innovation Tami Hoffman says this fear is partly what has prompted the organisation to be proactive in putting tools and procedures in place “to keep safe the future of trustworthy news”.

ITN recently issued guidelines for its staff about AI use, which called for them to maintain editorial independence, remember that AI is a tool and not a substitute for editorial judgement, and to work with the management team when considering its use.

“The editorial guidelines, which we already have in place as a news organisation, are what we lean on to make decisions about how we use AI,” says Hoffman. “Because so much of what we do is about nuance, it’s about judgement, checking and verifying, questioning, looking at assumptions and biases, and making sure that we’re not just relying on one source of information. Those are skills we already have in place.”

“AI frees up more time for the human analysis of turning data into a story”
Tami Hoffman, director of news distribution, ITN

Once again, Hoffman points to the value of AI in freeing up resources: “AI can help with data sifting, which is often what makes doing investigations so expensive and time consuming.

“AI frees up more [time for] the human analysis of turning data into a story and thinking about how that story is going to be presented to our audience in a way that engages them.”

THE GROWTH OF AI

Artificial intelligence had already contributed £3.7bn to the UK economy by 2022.

Data published in a report by Next Move Strategy Consulting shows little sign of demand for AI software declining over the next decade, with the global AI market expected to grow exponentially, reaching $1.85trn (£1.45trn) by 2030, up from $95.6bn (£75bn) in 2021.

ChatGPT reportedly reached 1 million users within five days of launch. By comparison, it took Netflix and Twitter three-and-a-half years and two years respectively to reach the same number.

The chatbot market is expected to reach $1.25bn (£1bn) by 2025, a significant increase from 2016, when it stood at $190.8m (£150m).

While concerns about AI-generated ‘fake news’ may be rife among both journalists and audiences, there are cases where its use allows the truth to be told.

The 2022 BBC documentary Hong Kong’s Fight For Freedom used AI to protect the identities of the protesters who were interviewed.

The production team worked with Teus Media and hired Cantonese-speaking face doubles, who were filmed in the same chair, in the same studio and with the same lighting as the protesters. The doubles carried out a series of simple facial expressions and head movements and the AI spent two months learning how to map the faces of the doubles on to the faces of the protesters.

“AI can be a powerful tool for allowing people to speak their truth in a safe way”
Toby Paton, director

“What was really amazing about this method was once the AI had mapped the double’s face on to the protester’s face, every nuance of the protester’s facial expressions was retained,” says director Toby Paton. “That is an extraordinarily powerful tool for communicating human emotions and allowing people to speak their truth in a way that is powerful but also safe for them.”

While AI is certain to be a tech of the future, it is very much a tech of today. “This is the same situation the accountants had in the 1980s, when Excel came out,” Van der Velden says. “It’s just another tool.”

“TV needs a radical creative shake-up. AI will give it some new impetus”
Alex Connock, senior fellow, Saïd Business School

As with any rapidly expanding technology, there are frontiers yet to be explored and challenges to address.

“TV needs a radical creative shake-up,” Connock says. “When I switch on daytime TV or mid-evening TV, I don’t find myself saying, ‘Wow, the zeitgeist is really being shifted’.

“It could really do with some new impetus. And I think AI will be the thing that gives it that.”