Benjamin Field of Deep Fusion Films on why we need a workable system that indicates how AI has been used and where human oversight occurred


There’s a persistent and widening disconnect between what buyers expect AI to do in unscripted content and what it should be doing. This isn’t just a teething issue in a nascent industry; it’s becoming an obstacle to both innovation and integrity.

We’re in an era where a tool like generative AI, which can genuinely reshape how we work, is misunderstood in two opposing directions. It’s feared as a corruptor of truth or fetishised as a one-click creativity engine. Neither is accurate.

And the confusion, if left unchecked, will damage the very thing we’re trying to build: programming that is transparent, credible, compelling and commercially viable.

My preference is always authenticity at the centre of how we work with AI. Not as a buzzword, but as a production principle.

Because in the world of unscripted content, truth isn’t just a virtue, it’s the format.

The problem is best seen in pitch meetings. AI visuals! Real-time avatars! Fully synthetic scenes! The temptation is clear – lower cost, faster turnaround, more spectacle. And technically, much of it is possible. But should it be done? And more importantly, is it the right kind of storytelling?

Recently, I was asked to create something completely inauthentic for a TV commission. The brief wasn’t about creative experimentation or ethical reconstruction; it was about spectacle for spectacle’s sake. A visual gimmick that had the potential to mislead audiences about what was real and what was not. I refused.

That refusal might cost us the commission, but I stand by it. Because if audiences begin to mistrust what they’re watching, we risk undermining the entire value of the unscripted genre. Trust is the contract, and if we violate it, even once, we lose our footing.

This is not about being anti-AI. Quite the opposite. It’s about using AI properly, with clarity, legality, and creative purpose.

Let’s be clear: generative AI is incredibly powerful, particularly when used for what it’s actually good at – processing, interpreting, and sculpting data into usable outputs.

We’ve seen remarkable results using AI to speed up workflows, to iterate design, to localise content, to translate accurately, and to enhance visuals. But the best results come when AI is grounded in real input.

My preference for Deep Fusion has always been that we shouldn’t start our creative process with prompts and hope for the best.

We start with reality. Real archive. Real footage. Real data sets. And then we build from there, using AI as a synthesiser, not a fantasist. 

That’s how we achieve something that matters more than realism – authenticity.

Truth embedded in the process. Provenance traceable. This is not ‘AI slop’, it’s ethical, generative media with purpose.

This gives commissioners, and their insurers, confidence. And it gives the audience something even more valuable, which is honesty.

Much of the current industry focus is on how creatives need to adapt to AI, how to use it responsibly, how to integrate it into workflows, how to avoid crossing ethical lines. And that’s important.

But there’s a missing part of the conversation about how buyers and commissioners want to work with AI.

What’s acceptable? What’s encouraged? What level of augmentation is aligned with a network’s editorial values? What are the expectations around transparency, not just to meet regulatory standards, but to protect audience trust?

For example, the BBC has held the position that BBC News and Current Affairs will be ringfenced from any AI use; however, BBC News has also just hired a head of generative AI. Is it any wonder the market is confused about what is and isn’t allowed, with so much mixed messaging and so little public information?

Without clear answers to these questions, we’re left with a vacuum. Creators are left guessing. Some over-deliver synthetic polish. Others under-use AI entirely, afraid that any generative material might derail the project.

Buyers have a responsibility, not just to understand what AI can do, but to help define what they want it to do on their platforms.

This commitment to transparency is why Article 50 of the EU AI Act should be a good thing. In principle, it is. Article 50 mandates that AI-generated content must be flagged to audiences in the first frame.

That’s great in theory. But in practice? The current lack of guidance around implementation is already creating hesitancy in the market. Producers are paralysed, worried that the mere act of flagging will cause audiences to switch off, to disbelieve the content entirely, or to question the legitimacy of what they’re watching.

That’s not paranoia, it’s already happening.

We need nuance. There’s a vast difference between AI used to reimagine a GFX plate and AI used to invent an entire synthetic presenter. Both might require transparency, but they should not be treated the same.

We need a tiered system. One that supports innovation while protecting truth. One that encourages producers to be transparent without fearing commercial suicide. And we need it fast, before Article 50 does more harm than good.

At Deep Fusion, we’re already working toward this – a visual tagging framework that clearly identifies how AI was used and where human oversight occurred.

Not just for compliance, but for audience trust. And that trust matters, because the sector is changing rapidly: the traditional commissioning model is dying, and we’re on the verge of a new era.
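To make the idea concrete, here is a minimal sketch of what a tiered AI-use tagging scheme could look like in code. The tier names, fields and label format below are my own illustrative assumptions, not Deep Fusion’s actual framework; the point is simply that disclosure can be structured data – what the AI did, at what level, and where a human signed it off – rather than a single blanket warning.

```python
from dataclasses import dataclass
from enum import Enum

class AIUseTier(Enum):
    """Hypothetical tiers of AI involvement, from light touch to fully synthetic."""
    ASSISTIVE = 1   # e.g. AI-aided grading, translation, workflow tools
    AUGMENTED = 2   # e.g. AI-extended GFX plates grounded in real footage
    SYNTHETIC = 3   # e.g. fully AI-generated scenes or presenters

@dataclass
class AIDisclosure:
    tier: AIUseTier
    description: str       # what the AI actually did
    human_oversight: str   # where a person reviewed or approved the output

    def label(self) -> str:
        """On-screen label text, scaled to the tier of AI use."""
        return (f"AI use: {self.tier.name.title()} - {self.description} "
                f"(human oversight: {self.human_oversight})")

# Example: an augmented shot, disclosed with its oversight step.
tag = AIDisclosure(AIUseTier.AUGMENTED,
                   "background plates extended with generative AI",
                   "approved by the edit producer")
print(tag.label())
```

A structure like this would let a broadcaster render different on-screen treatments per tier – a light caption for assistive use, a prominent card for fully synthetic material – rather than treating a reimagined GFX plate and a synthetic presenter identically.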

One of the core reasons AI is so appealing to unscripted buyers is money. Digital-first content has never found a consistently profitable model, partly because the production costs still mirror traditional pipelines, while the revenues do not.

AI helps here, but only when it’s used correctly. We’ve designed a workflow where producers can make premium-looking, audience-worthy content at a fraction of the cost, because the AI tools we use are legally signposted, insured, and ethically embedded from day one.

We’re at a pivotal moment in how AI integrates into factual and unscripted production. There are three things that need to happen now – real education for buyers and commissioners; transparent standards for use and labelling; and authenticity as a strategic choice.

AI isn’t going anywhere. Nor should it. I believe AI is a vital part of the future of unscripted. But only when used in ways that preserve the foundational values of the genre – truth, trust, and audience connection.


Benjamin Field is CEO at Deep Fusion Films
