Dave Colantuoni at Avid provides a series of practical examples of where artificial intelligence can aid the work of video editors

Certain phrases and buzzwords swirl around every major trade show, used by brands to engage with visitors and associate themselves with relevant topics. They end up as everyone’s talking points and appear in every exhibitor’s marketing efforts. A number of years ago, artificial intelligence and machine learning became prominent, both promised as tools that would revolutionise how media is produced, predominantly in post-production workflows.

It’s also important to understand that revolution doesn’t happen overnight. Fortunately, we have seen the industry adopt more AI-enhanced tools, adding them to existing workflows or integrating them into new products. These are the applications that side-step the hyperbole and have a tangible impact on users.

For post-production in particular, AI can be used right now to help overcome many of the challenges that editors have traditionally faced.

Faster than a speeding search function

Being able to process and ingest assets as quickly and as efficiently as possible, and then having fast access to them through search and find functionality, is what many technology providers are looking to deliver to editors. By integrating AI-enhanced ‘smart’ features into editing or asset management software, those users can see a number of benefits – including phonetic search and indexing as well as intelligent in-scene identification.

Content indexing is a somewhat unavoidable part of the post-production process and requires a certain amount of data wrangling. But AI can remove the time-consuming manual aspect of this, speeding up and simplifying the process by automatically analysing all the clips being imported into a project or sitting in a media library of any size.

The AI can phonetically index all the assets’ audible dialogue so editors can easily search and find content, quickly returning results based on a word or phrase. Users can quickly navigate to the exact spot they need in the clip, even review or scrub through several takes at once and add relevant clips to any sequence.
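
To make the idea concrete, here is a minimal sketch of that kind of dialogue indexing, assuming the open-source Whisper model as a stand-in for whatever speech engine a given product actually uses; the clip names and the simple word index are purely illustrative, not a description of any shipping feature.

```python
# Illustrative sketch only: build a searchable dialogue index from clip audio.
# Assumes the open-source `openai-whisper` package as a stand-in speech engine;
# any ASR that returns timed segments would work the same way.
from collections import defaultdict

import whisper  # pip install openai-whisper

model = whisper.load_model("base")

def index_clips(clip_paths):
    """Map each spoken word to the clips and timecodes where it occurs."""
    index = defaultdict(list)
    for path in clip_paths:
        result = model.transcribe(path)  # returns full text plus timed segments
        for segment in result["segments"]:
            for word in segment["text"].lower().split():
                index[word.strip(".,!?")].append((path, segment["start"]))
    return index

def search(index, term):
    """Return (clip, start_time) hits for a single search term."""
    return index.get(term.lower().strip(".,!?"), [])

# Hypothetical media library
dialogue_index = index_clips(["interview_take1.mov", "interview_take2.mov"])
print(search(dialogue_index, "budget"))
```

An editor could jump straight to each returned timecode, which is essentially the scrub-to-the-exact-spot behaviour described above.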

Game recognise game

One of the mainstays of AI is the ability to use machine learning to teach software to recognise faces. In the world of media production, this is an invaluable tool for any editor, whether you’re working in film, TV, commercials, broadcast news or live sports.

This recognition can be used to identify actors, presenters or players and add information to metadata that’s easily searchable when needing to find scenes or clips featuring that person.

This can also be extended to entire scenes, automatically adding metadata about identifying features of a setting, or its geographic location. This can then be easily searched when coming back to a project or needing to revisit a scene.
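
As an illustration of the underlying technique rather than any particular product, the sketch below uses the open-source face_recognition library to tag a frame with the known people it contains; the reference images, exported frame and metadata structure are all hypothetical.

```python
# Illustrative sketch: tag a frame with recognised people so the clip becomes searchable.
# Uses the open-source `face_recognition` package; names and files are hypothetical.
import face_recognition

# Reference faces the system has been "taught" (one image per person here)
known_people = {
    "presenter_a": face_recognition.face_encodings(
        face_recognition.load_image_file("presenter_a.jpg"))[0],
    "guest_b": face_recognition.face_encodings(
        face_recognition.load_image_file("guest_b.jpg"))[0],
}

def tag_frame(frame_path, tolerance=0.6):
    """Return the list of known people detected in a single exported frame."""
    frame = face_recognition.load_image_file(frame_path)
    found = []
    for encoding in face_recognition.face_encodings(frame):
        for name, known in known_people.items():
            if face_recognition.compare_faces([known], encoding, tolerance=tolerance)[0]:
                found.append(name)
    return found

# Metadata an asset manager could store alongside the clip
clip_metadata = {"people": tag_frame("clip_frame_0120.png")}
print(clip_metadata)
```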

Speech-to-text functionality is another huge benefit in post-production workflows. By automating the enhancement of metadata for media libraries, not only can search be significantly improved, but AI and neural networks can also be used to verify subtitles and closed captioning – something which is a requirement for programming services.
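
As a rough illustration of how that verification might work, the sketch below compares automatically generated transcript lines against delivered caption lines and flags anything that drifts too far; the sample lines and the similarity threshold are hypothetical, and a real system would also align against timecode.

```python
# Illustrative sketch: flag caption lines that drift too far from the ASR transcript.
# Inputs and threshold are hypothetical; production systems align by timecode as well.
from difflib import SequenceMatcher

def flag_caption_errors(asr_lines, caption_lines, threshold=0.8):
    """Pair lines up and return the ones whose similarity falls below the threshold."""
    flagged = []
    for asr, caption in zip(asr_lines, caption_lines):
        score = SequenceMatcher(None, asr.lower(), caption.lower()).ratio()
        if score < threshold:
            flagged.append((caption, round(score, 2)))
    return flagged

asr = ["welcome back to the studio", "our top story tonight"]
captions = ["Welcome back to the studio.", "Our top [inaudible] tonight."]
print(flag_caption_errors(asr, captions))  # flags the second caption line for review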

This is especially significant for live operations such as breaking news, current affairs and sports reports. Being able to turn around edits as quickly as possible whilst following all accessibility guidelines is more important than it’s ever been, as audiences have become used to the immediacy of the internet.

More creativity, please

These are all tools that quickly become invaluable to editors once they see the benefits. Delivering more effective, less time-consuming workflow functionality is obviously valuable in itself. But getting that functionality without having to spend time and resources up front manually indexing consistent, accurate metadata as content is ingested delivers the biggest benefit of integrating AI into post-production workflows: time.

Time not spent indexing shots and tagging actors in every scene in which they appear is time that editors can dedicate to working with their scenes and projects, delivering a better, more creative edit.

Being able to spend their time focusing on editing rather than on manual data admin is the most valuable proof point for integrating AI into post-production workflows.

What’s in store?

Editing and asset management software is designed with the user in mind and, as technology continues to evolve and progress, so will these solutions. We’ll no doubt see more artificial intelligence tools being integrated across the production workflow, but likely none more impactful on the user experience than those in the post-production process.

We don’t expect users to employ completely new ways of working just because artificial intelligence can be made available. Instead, we’ll see developments continue in the way they have for the last few years. Existing functionality will be improved or automated by AI to create more efficient versions of the workflows we already use every day.

For example, we can see how machine learning can be used to identify and track objects on screen which in turn could enable the automation of a number of other processes. This might be adding blur, rotoscoping shots or reformatting assets. This again won’t replace personnel but will instead provide even faster and more effective workflows that give editors the ability to do what they do best – edit.
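
As one possible sketch of how tracking could drive an automated blur, the example below follows a manually selected region through a clip with an off-the-shelf OpenCV tracker and blurs it in every frame; the file names, codec and initial bounding box are hypothetical stand-ins, not a description of any shipping product.

```python
# Illustrative sketch: track a region through a clip and blur it in every frame.
# Requires opencv-contrib-python; file names and the initial box are hypothetical.
import cv2

capture = cv2.VideoCapture("clip_to_blur.mov")
ok, frame = capture.read()

# Region an editor (or a detector) marked on the first frame: x, y, width, height
bbox = (320, 180, 120, 120)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

writer = cv2.VideoWriter(
    "clip_blurred.mov", cv2.VideoWriter_fourcc(*"mp4v"),
    capture.get(cv2.CAP_PROP_FPS),
    (frame.shape[1], frame.shape[0]),
)

while ok:
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in box)
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
    writer.write(frame)
    ok, frame = capture.read()

capture.release()
writer.release()
```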

 Dave Colantuoni is VP product management at Avid.