The technology has upsides and downsides, but human insight and sensibility cannot be easily replicated

More than a year ago, the chief executive of one of the private Silicon Valley companies that run massive AI projects for what they politely call ‘three-letter agencies’ made two predictions to me. These have turned out to be bang on.

First, he said, the datasets that AI firms were trawling were so vast that they were likely to run out of internet to ingest for training their AI models.

Second, these firms would have to split the world’s information into pre- and post-2021 datasets, and material from 2020 and earlier would rapidly become more valuable.

The reason old data would suddenly become so valuable, he said, is a phenomenon known as ‘dogfooding’: essentially, the idea that new data found on the internet in 2023 and beyond is at serious risk of being synthetic, itself created by AI. And training new AI models on synthetic data has been found to weaken their performance.

Today, both his predictions have come true. It is indeed the case that large language models including ChatGPT, which become more powerful and sophisticated the more information they ingest, are reaching the limits of the internet. (As of this August, you can also get a line of computer code from OpenAI to prevent your site from being used to train GPT-5.)
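For anyone curious what that code actually looks like: the mechanism OpenAI has published is a standard robots.txt directive naming its GPTBot web crawler. A minimal sketch, assuming a site owner wants to block the crawler from the whole site, is two lines added to the site’s robots.txt file:

    User-agent: GPTBot
    Disallow: /

The Disallow rule can also be narrowed to specific directories rather than the entire domain.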


Second, dogfooding has become a real problem, to the point that researchers have identified weaknesses in the outputs of large language models in recent months: papers written with AI are not as good as they were three months ago. This is made even worse when the humans employed to fine-tune AI models via supervised learning game the system by using AI tools as a shortcut, feeding yet more synthetic material back into training.

It seems that from the perspective of AI itself, there is something about human content that is unique and special. Truly synthetic content is just not as useful.

All this should be a point of existential reassurance to people who work in the TV industry. Contrary to the rather dystopian propositions made by the more vociferous striking Hollywood writers, AI is not going to take the place of the really creative jobs any time soon.

Yes, Emad Mostaque, chief executive of Stability AI, which runs Stable Diffusion, has claimed he could create an entire HD series of Game Of Thrones from a single verbal prompt within the next five years. But the real question is, would it be any good?

Your human insight and sensibility cannot currently be replicated by machines, and may never be. Human beings respond uniquely to content created by humans.

That’s not to say AI is completely risk-free for TV. There are huge issues around the impact of legacy bias in datasets feeding through to outputs today – not least around ethnicity and gender (check out the brilliant work of Joy Buolamwini at MIT on that). Copyright is a major challenge for TV companies, which, according to Mostaque, will all imminently need their own custom generative models.

But there are AI upsides. AI will turn video editing into a vast data science project with unprecedented immediacy and insight into the footage. AI in Unreal Engine and other technologies enables a whole new generation of studio uses. AI can help with music, voice (for good and ill) and almost every other stage of the production process.


These are all points of reassurance and concern about AI that I will be exploring in a session on Wednesday 23 August at the Edinburgh TV Festival with my colleagues Hannah Fry and Muslim Alim. But there is a common thread: our own, AI-augmented intelligence.

That intelligence, that human agency, has always been vital in TV. It’s what made Friends, Happy Valley and even Black Mirror. In the future, we will matter more than ever.

  • Alex Connock is a senior fellow at the University of Oxford and author of Media Management and Artificial Intelligence (Routledge)