The impressions of VFX, post-production and production professionals on the potentially game-changing text-to-video platform

[Image: still from Sora-generated footage]

OpenAI’s Sora has caused quite a stir among content creators and the industry at large.

There’s been something of a collective intake of breath as the industry struggles to digest a series of videos showcasing the impressive abilities of the GenAI platform.

The text-to-video model appears to have catapulted AI-crafted video to the next level, producing realistic-looking content that can seemingly bring to life any idea you can think of.

Put in your prompts and out comes a picture-perfect visualisation.

Sora creates videos of up to one minute in length, featuring highly detailed scenes, complex camera motion, and “multiple characters with vibrant emotions,” says OpenAI.

Watch the video below to see examples of videos created by Sora.

The images used throughout this feature (except for the photos of the contributors!) are all screen grabs from Sora-created footage.


Ludo Fealy, founding partner, nineteentwenty VFX

As with most things AI, the initial reaction is one of amazement, quickly followed by paranoia mixed with fear. Sora is seriously impressive, but I do think it is a little way off from being completely threatening. Currently, using Sora is very much like doing a stock footage search. The images it produces are based on the prompts that you give it but, as far as I am aware, you can’t refine the results enough to make them so bespoke that you could create a narrative of any useful length. Even if you could, you have to ask: why would you want to do that?

In the main, we shouldn’t fear the technology but see it as another tool for us to use. If someone had shown me the tools available today 15 years ago, my heart might have sunk and I might have thought that it was game over. Instead, they have become an indispensable part of our toolkit, improving our workflow.

There is also a much bigger, existential conversation to be had. We’ll save that for another day. I’ll finish by saying that no amount of code or computing power will ever replace the expression, spontaneity and decision-making powered by the creative eye of the human.


Dave Stephenson, finishing artist / head of picture, The Finish Line

It seems like every couple of months there’s a new Thing that we all need to panic about, throw our arms up in the air and run for the hills because this… this is the final nail in the coffin of film, television and the industry as we know it. Commercial television, Sky, “You wouldn’t steal a car”, digital vs film, BitTorrent, streaming, closing down channels, opening up channels, to name a few. And now comes machine learning.

Does Sora shift human input to an increasingly redundant role in content creation? No. In its current form it’s hugely limited. It’s not The Thing.

But it does point the way in a couple of key areas.

Firstly, it gives people who would never normally have access to the funding or skills the tools to create animatics, pitches or rudimentary shorts that bring their ideas to life. Think how many music videos this is going to generate. It offers a huge range of people a means of expression, letting them try things out in their own time and space, without the pressure of time or money.

Who knows what opportunities might arise through the democratisation of technology that this will provide?

The industry still hugely under-represents diverse talent, so anything that opens doors is welcome.

Secondly, in order to get anything out of it, you have to have humans putting instructions into it. That’s an incredibly specific skill. At the moment it’s a technical one, almost akin to programming, but I can imagine it turning into a hybrid tech/artistic role, much as the finishing artists at The Finish Line are exceptional colourists, VFX artists and online editors all in one package.

There are important decisions to be made with regard to camera lens, focus, film stock (potentially), exposure, actors’ performances, lighting, and all the “happy accidents” of weather, breath and coincidence that will need to be artificially added by humans. If you’re on a film set and you don’t like something, you can say “Move that chair please,” or “Can you try that a little slower?” Trying to get that from Sora is far more difficult.

I think the last point to make is that change is always viewed as doom and gloom rather than opportunity. On one hand, I get it: everyone’s got their sunk costs and legacy infrastructures. But change is constant and inevitable, and we owe it to the next generations to embrace it so that they have as much opportunity as possible to work in an industry that’s built for the future rather than one that’s built for the past.

[Image: still from Sora-generated footage]


Alex Meade, operational managing director, Fifty Fifty

On first look, the results created by Sora are incredible. Seeing moving, photorealistic human beings appear out of thin air from a text prompt is startling, and shows what an incredible upward curve this technology is on.

All sorts of questions are raised about what can be achieved at this stage: the ability to direct and create iterations of videos, the lack of lip-sync, the quality of images in terms of colour space and bit depth, control over framing and angles, and the fact that images are not layered.

There are also questions about the potential cost and processing time, and about the legal issues behind the data. But we have seen such rapid progress in this technology that we must assume these questions are being answered, and that a version which addresses them will be widely available soon.

What are the implications for our industry? You can see immediate use cases, such as storyboarding and the creation of GVs, and no doubt we’ll see some clever uses to create entire pieces of content in various forms.

However, will it replace the human ability to create, edit and tell a story? I don’t think it will.

Will it enhance creativity, offer a complementary tool for us to use, perhaps offer some cost efficiencies? That seems likely. Overall, I see it as a tool to be embraced and used responsibly, with it being down to us to adapt to the use of new technology, as is ever the case.


Martin Gent, creative director, Buska Video

The announcement of Sora has undoubtedly sent shockwaves across the media industry and far beyond. It’s been a real wake-up call for many who had been watching the development of AI video from the sidelines until now. As someone who offers AI training and advises on the use of generative AI in creative workflows, I’ve noticed a sudden spike in enquiries since OpenAI’s announcement.

Sora has cut through in a way that no other AI tool since the launch of ChatGPT has. It represents such a dramatic leap forward in AI video generation that it can’t be ignored or easily dismissed. It seems we’ve entered a new era, where established media companies are starting to recognise the risks of being left behind if they don’t start engaging with this incredible new technology, rather than focussing on the challenges, both legal and ethical, that AI presents.

As for how Sora will impact production when it is finally released, I believe it will depend to a large extent on how controllable it is when used in text-to-video or image-to-video mode. Will we be able to direct the camera and movement within the frame? Will Sora be able to render the same scene from multiple angles with consistent characters? Some of the early tests suggest it might be able to do this, but I suspect there will still be a large element of chance at play, meaning it won’t be able to fully replace traditional filming for some time.

With all the excitement around Sora’s text-to-video capabilities, what many have overlooked is that Sora can also take video as an input and then add to it via text prompts. This could prove to be a very powerful combination, offering all the control of traditional filming but with a powerful layer of AI enhancement on top. This will undoubtedly be of concern to anyone who works in the VFX industry, but it could impact many other departments too.

I believe the first to be impacted by Sora’s release will be the stock video libraries, as has been the case with AI image generation. Then we’ll start seeing Sora being used to generate things like establishing shots and GVs.

Sora’s incredible ability to blend two videos together will be exploited to the max to make eye-catching music videos and commercials – until it becomes so overused that no one goes near it.

But what I’m most excited to see is not what the established media companies do with Sora, but what individual creatives or small independent teams do with it. There’s been a lot of talk about how AI will democratise filmmaking, but until recently that may have felt like wishful thinking. Now with Sora we can see with our own eyes just how powerful this new technology is, and it’s only going to improve from here on.

[Image: still from Sora-generated footage]


Jake Strong Jones, social operations lead, LADbible Group

OpenAI’s Sora video generation is the starting gun for a paradigm shift in content creation. With AI video generators such as Sora, we now have access to an unlimited library of content that can be tailored to our exact wants and needs.

This will speed up content creation.

We create a massive volume of content for a range of social media platforms, often with quick turnarounds. Sora is another tool to simplify our content creation journey across platform deliverables, used under our producers’ and editors’ direction.

Some people are understandably concerned about the future of content in the presence of AI-generated video, as they see human input being minimised in the creation of engaging content, but I don’t believe that to be true.

The “human touch” will be as important as ever in our new AI world because, used effectively, AI is just another tool in our arsenal to unlock our creative potential.


Asa Bailey, founder, BaileyAI

What does Sora mean for those working in content creation? You’d better get ready for the world’s first AI TV channel or streaming platform.

Imagine launching a music video channel this year: a channel of endless AI-generated music videos.

How do we (human workers) fit into this idea? What jobs will we need to do to deliver these endless streams of potentially personalised video content?

Think about this, and you will start to answer the question: What will those in TV and media production do in a media landscape dominated by generative AI?

For me, I’ve found myself supervising the shooting of human performance data that we then feed into the AI to guide nuanced model outputs.

[Image: still from Sora-generated footage]


Paul Ingvarsson, co-owner / finishing editor, Storm

We are actively working on two AI-driven projects: one is already commissioned, and for the other we are developing a proof-of-concept proposal for a client.

The commissioned piece may well include the creative use of AI voice generation of a well-known person to help fill the gaps in poor-quality archive recordings.

We could then possibly utilise video lip-syncing techniques using Sora or alternative platforms.

The ‘in pitch’ project is about to be released to channel commissioners for consideration and is based on AI voice and video syncing.


Paul Doyle, head of editorial video, Immediate Media

For me, the impact of Sora highlights the need for continuous learning and education. It’s not just about staying relevant, it’s about leading the charge in an ever-evolving landscape. The transition from traditional to AI-driven content creation demands a new kind of literacy – understanding AI language and mastering the art of the prompt.

It also reduces the level of knowledge, skill and experience needed to capture a scene. Your command of a crew, mastery of a timeline or power to produce becomes less valuable when a ‘text’ request is all that is required. This is true of all the tools that AI and GenAI are providing, which run the gamut of the production cycle.

Future-proofing our careers means diving deep into AI’s capabilities, limitations, and potential, transforming ourselves from spectators to orchestrators. The goal, as I see it, is to cultivate an industry workforce that is not only adept at using AI tools like Sora but is also visionary in leveraging them to craft compelling, cutting-edge content.

[Image: still from Sora-generated footage]

Simon Wilkinson, head of VFX, 1185 Films

Even though I haven’t used Sora myself, I’ve been keeping a close eye on what it can do. It’s a little frightening to think what it will be able to achieve in a couple of years’ time.

I worked on an Unreal Engine test shoot a couple of weeks ago, for a beer commercial. The scene was a Canadian-style forest in autumn and all of the backgrounds were created in Unreal. But the director needed an establishing shot which he created in a matter of seconds in Sora, matching the background scene exactly.

So, as far as practical uses are concerned, Sora seems a much better solution than trawling stock footage sites and never finding exactly what you want, and a much quicker one too.

One last thing…

What is your opinion on Sora and, more generally, the rapid growth in the power and potential of GenAI-driven innovations?

Is it the beginning of the end, or (to quote Lou Reed), the beginning of a great adventure?

I’d like to hear from you. Especially if you’re female!

Please email your thoughts to me, and I’ll add them to the article above.