Benjamin Field, CEO of Deep Fusion Films, looks at the latest legal battle over AI and what it means

The UK ruling in Getty's case against Stability AI has been read by many as confirmation that AI training, at least when it happens abroad, isn't automatically a breach of copyright. But it's also a sharp reminder that trademarks still bite, especially when your model starts spitting out someone else's logo.
Getty had accused the AI business of scraping 12 million images to train its model, Stable Diffusion. In one of the first AI cases of its kind in the UK, the court found that Getty's trademark had been infringed, but Getty lost on other copyright-related issues, with Stability AI arguing that the case didn't belong in the UK because the model was trained outside the country.
For producers, the headline is this: courts are starting to treat model providers as accountable for what their systems output, and producers remain accountable for what they publish.
The most important thing to come out of this is the distinction between US training and UK use.
Off the back of yesterday's verdict, here's how I think the land lies:
Training is governed by the law of the place where the training happens. If a model is trained in the United States and a US court says that training qualifies as fair use, that act is lawful in the US. That can remain true even if the dataset included UK works.
UK law kicks in at the point of use. When that same 'fair use' model is used in the UK, the legal frame switches to UK copyright and trademark law. Based on the current ruling, importing or hosting model weights isn't, by itself, a copy of the training works. The line is drawn at output. In plain English: if the model is legal to use in the US, it's legal to use in the UK. However, UK copyright law applies to whatever that model produces.
So yes: a US-trained model can, in principle, be lawful to use in the UK, right up until the moment an output infringes a UK rightsholder. That's the current line in the sand (just don't mistake it for solid ground).
Think of it like a painting. You can buy it and import it from anywhere, but once the painting hangs in a UK gallery, UK rules apply to what’s on the canvas.
So, what’s changed for producers?
It feels like developers can't dodge responsibility by saying, "It was the user's fault." The Getty case confirmed that when an AI system produces outputs containing a trademark, even accidentally, that counts as use in the 'course of trade' by the developer, not the end user. In other words, the company behind the model is legally responsible for what the model can generate. Users shouldn't be held responsible for infringing IP merely because the AI model generated it; producers, however, would be responsible for the commercial use of that output.
For producers, this isn’t some exotic new legal frontier. It means generative AI now sits in the same world as every other production asset, subject to the same checks, paperwork, and chain-of-custody thinking we already apply to footage, archive, and music. The novelty has gone; we’re back to rights, contracts, and audit trails.
Practical guardrails for production include:
- Put vendor accountability first.
- Treat outputs as editorial assets, not magic.
- Remember that territory matters again.
- Pre-clear talent and brands.
- Make sure your insurance is aligned.
Generative AI will be useful in television, but not because prompts replace people. It'll be useful because it'll operate like every other part of the supply chain: licensed in, logged through, cleared out. Creativity gets to be bold when compliance is boring. That's not the revolution AI developers tried to promise, but it may be the evolution the industry actually needs.
If you only remember three things, they should be:
- Training venue sets the legality of training. Publishing venue sets the legality of output.
- Developers own the training story. Producers own the publishing story.
- Contracts, not vibes, decide whether your series is safe.

Benjamin Field is CEO of Deep Fusion Films