James Pollock, creative technologist at VFX house Lux Aeterna, looks at when and when not to use automation


Media hype and existential dread aside, AI is nothing new. In fact, machine learning processes have been deployed across industries for years. This rings particularly true in the visual effects industry, where studios like Lux Aeterna have always sought to embrace and innovate with new technologies.

So it’s no surprise that the developments we’ve seen in generative AI have caught the eye of VFX artists everywhere, including at Lux Aeterna. There’s something very powerful about watching a photoreal image or video manifest from nothing in a matter of seconds, GenAI seemingly bringing life to a simple text prompt. And of course, we’ve been seeing these technologies themselves develop in a similarly rapid way over the past few years.

However, these technologies are far from unproblematic, from the myriad copyright lawsuits AI firms face to non-consensual deepfakes and fake news. Now more than ever, it's important to take a step back from the hype and controversy, to evaluate the AI landscape on your terms and with your own priorities in mind.

Where do we start with AI implementation?

The first thing to consider is how practical it is to use a given tool or process. After all, there's not much point digging into the deeper issues if the method isn't up to scratch. Both Lux Aeterna and our clients pride themselves on exceptionally high standards, and we need tools that can deliver.

It’s worth remembering that while many of these tools have been designed to make the most of a powerful underlying technology, they haven’t been designed for VFX practitioners, or with the needs of the VFX industry in mind. That means contending with limited colour depth and resolution, inefficient workflows, limited render information, and an overall lack of fine control. While it might seem promising to utilise a chain of different AI models to overcome each other’s limitations, you can end up beholden to a series of black box processes that you can’t interrogate or dial in.

Something I’ve been enjoying experimenting with is Gaussian splatting, largely because of how far it’s come in recent years. You can now take a video or series of photos and turn them into high-quality, production-ready assets. Gaussian splatting toolkits such as CG Nomads’ GSOPs for Houdini integrate directly into the industry-standard VFX software we use every day, allowing us to use this new kind of asset within established workflows and opening up a huge range of creative possibilities. That kind of compatibility is always a plus when finding new tools. 

We’ve been experimenting with tools like PostShot for training and editing Gaussian splat models. When I’ve introduced it to colleagues, they’ve been surprised by how quickly they pick it up. We’re starting to have really interesting conversations with clients about the technology too.
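As a sense of how these splat assets look under the hood: trained Gaussian splat models are commonly exported as `.ply` files whose vertices carry position, opacity, scale, rotation, and spherical-harmonic colour fields. The sketch below (Python, stdlib only) shows a minimal header inspection you might run before importing an asset into a pipeline; the function names are illustrative, and the field layout assumed is the common 3DGS export convention rather than any specific tool's output.

```python
# Minimal sketch: inspect a Gaussian splat .ply header before pipeline import.
# Assumes the common 3DGS export layout (x/y/z, f_dc_*, opacity, scale_*, rot_*).
# Function names and the heuristic below are illustrative, not a standard API.

def read_splat_header(path):
    """Parse a .ply header; return (vertex_count, property_names)."""
    count, props = 0, []
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])   # number of splats
            elif line.startswith("property"):
                props.append(line.split()[-1])  # property name is the last token
            elif line == "end_header":
                break
    return count, props

def looks_like_gaussian_splat(props):
    """Heuristic: does the file carry the fields a 3DGS-style model needs?"""
    needed = {"x", "y", "z", "opacity", "rot_0", "scale_0", "f_dc_0"}
    return needed.issubset(props)
```

A check like this is cheap insurance: it catches a plain point-cloud `.ply` masquerading as a splat asset before an artist burns time troubleshooting it inside Houdini.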

How do you know you’re using ethical technology?

I'm no ethicist, and neither are many of the people working in VFX, but that shouldn't stop us from thinking critically and acting responsibly. The first consideration: your main priority is the client. If they have an AI protocol, make sure you follow it, and if not, ask them to review your plans. Explain the tools you're planning to use, let them know how they work, and be upfront about the cons as well as the pros. This should be a team effort, involving your producers and IT managers.

Also, consider if the tool is cloud-based. Believe it or not, clients aren’t typically on board with you uploading their project materials to random third parties, and you will have probably signed an agreement stating that you won’t. Conversely, if you dig into the terms and conditions of these cloud-based services, you’ll sometimes find language giving the operators permission to do all kinds of things with uploaded content, including using it to train their own AI models.

Leaks, cyberattacks, and other breaches have become all too common in the film and TV industry, so security is a top priority for us. At Lux Aeterna, we’re constantly reviewing our security policies and making sure they are robust and compliant with client expectations.

The greatest assets a studio has are its relationships and reputation, far more than any technology. Unfortunately, we’ve heard of studios getting into difficult situations with their clients over the use of AI, but with a bit of research, it’s completely avoidable.

When can you bring an AI tool into production?

Your tools must work for the project, the artists, and the client, so decisions about which AI software to bring into production should really be made on a case-by-case basis. R&D for new tools involves a deep investigation into the bespoke needs of each project and the creation of a package of creative and technical approaches.

The right time to introduce AI is when you can trust it to fill its role reliably within that package, and that’s where AI is different from other tools. Studios aren’t typically asked to think about the ethics of one file format over another, or whether choosing one compositing tool over another could create issues for client relations. However, AI often raises those bigger questions. It’s not just about whether it works technically; it’s about whether it fits ethically, creatively, and practically within the workflow.

Most studios are well-equipped to navigate those conversations. The key is understanding where AI genuinely adds value, ensuring its use aligns with your values, and being transparent with both your team and your clients.

How will AI affect the future of VFX?

As a creative technologist at Lux Aeterna, my role is heavily focussed on VFX R&D. I’m also involved in MyWorld, a creative technology research programme promoting investigation and innovation with cutting-edge technologies. These experiences make it easier for me to distinguish between AI media hype and tools that are likely to have a lasting impact on the creative production community.

With that being said, the noise around generative models from ChatGPT to Runway often distracts from the truly effective AI tools that have long been evolving in our sector. The rise of generative AI has been incredible to watch, but it’s still a technology that’s trying to find its place in the world and within the VFX industry. It shouldn’t be surprising that, whether AI-driven or not, we keep returning to the tools that were created first and foremost to meet the needs of VFX artists.


James Pollock is a creative technologist at Lux Aeterna
