Rickard Lönneborg, CEO at Codemill, on what developments need to happen in AI and ML to maximise their benefits for media workflows

ML and AI are increasingly used throughout the media chain, from auto-tagging metadata, creating transcripts, finding shot changes, conducting quality control and flagging inappropriate content, to improving content recommendations on an OTT service.

New developments are happening all the time. The one I’m currently most excited about is the ability to determine scene changes, making it possible for AI to find suitable places for ad breaks or binge markers.
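
To make that concrete, here is a minimal sketch of scene-change detection using the open-source PySceneDetect library; the file name and threshold are illustrative assumptions, not a production recommendation.

```python
# A minimal sketch: detect scene changes in a video and print
# candidate ad-break or binge-marker points. Assumes PySceneDetect
# is installed (pip install scenedetect[opencv]) and "programme.mp4"
# exists locally.
from scenedetect import detect, ContentDetector

# ContentDetector compares successive frames and flags a cut when
# the change in content exceeds a threshold (27.0 is the default).
scenes = detect("programme.mp4", ContentDetector(threshold=27.0))

for start, end in scenes:
    # Each boundary between scenes is a candidate ad break.
    print(f"Scene from {start.get_timecode()} to {end.get_timecode()}")
```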

Despite the wealth of AI tools available, however, very few are media-specific.

Given that media workflows are unique and involve many manual, long-winded processes, this is surprising, and it limits what is possible with many off-the-shelf tools.

The other main challenge for AI in media workflows right now is its limitations. While object recognition is good at deciding there is a car in the frame, it is unlikely any existing solution can tell you it is an Audi R8, for example.

If you are a niche channel serving only programming about cars, that level of detail would be crucial for integrating AI into your workflow.

Meanwhile, transcription technology has also improved massively but can still struggle with strong regional accents or very specific terminology.

Almost every process, from capture through to distribution, has manual elements that are labour-intensive but often repeatable. This means there is massive potential for AI to make that more efficient, freeing up time for people to focus on being creative.

With the right tools and processes in place, the entire media workflow could look a lot more automated. At capture, that could range from using AI to add effects or enhance the video as it is created, to automated cameras that require no operator. And there have already been moves towards AI-powered editing tools that can scan through footage to identify highlights and other key moments.
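
As a rough illustration of the highlight idea, the sketch below treats unusually loud audio as a proxy for key moments. Real editing tools combine many more signals; the file name and threshold here are assumptions.

```python
# A simplistic sketch: flag loud passages as candidate highlights.
# Assumes the audio has been extracted to a wav file first,
# e.g. ffmpeg -i match.mp4 match.wav, and librosa is installed.
import librosa
import numpy as np

y, sr = librosa.load("match.wav", sr=22050)       # decode the audio
rms = librosa.feature.rms(y=y)[0]                 # short-term loudness
times = librosa.frames_to_time(np.arange(len(rms)), sr=sr)

threshold = rms.mean() + 2 * rms.std()            # "unusually loud" cut-off
for t, level in zip(times, rms):
    if level > threshold:
        print(f"Highlight candidate around {t:.1f}s")
```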

AI is already being used to manage media better, from auto-tagging to QC, but as AI tools improve that could become even more sophisticated, helping to minimise the human intervention needed. At distribution, AI can automatically transcode content into the right format for each platform. It is already being used for content recommendations; how can we make it even more accurate at predicting the type of content a viewer is likely to enjoy?
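
On the transcoding point, the logic can be as simple as a lookup table of output profiles driving the ffmpeg command line. A hedged sketch follows; the profiles are illustrative placeholders, not any platform's real delivery spec.

```python
# A sketch of rule-driven transcoding: pick an output profile per
# platform and shell out to ffmpeg. Assumes ffmpeg is on the PATH.
import subprocess

PROFILES = {
    "mobile":  {"height": 720,  "vbitrate": "2500k",  "abitrate": "128k"},
    "web":     {"height": 1080, "vbitrate": "5000k",  "abitrate": "192k"},
    "smarttv": {"height": 2160, "vbitrate": "15000k", "abitrate": "256k"},
}

def transcode(src: str, platform: str) -> str:
    p = PROFILES[platform]
    dst = f"{src.rsplit('.', 1)[0]}_{platform}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{p['height']}",   # keep aspect ratio, even width
        "-c:v", "libx264", "-b:v", p["vbitrate"],
        "-c:a", "aac", "-b:a", p["abitrate"],
        dst,
    ], check=True)
    return dst

transcode("programme.mp4", "mobile")
```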

How do we get there?

If we are to fully maximise the potential of AI for media workflows, we need more media-specific AI tools, better training of the algorithms before deployment, and on-the-fly AI training once live.

Media workflows are unique in many ways, so an AI built for anything other than a media workflow is unlikely to have the desired effect, or it will need a lot of training to get there.

No two media content providers have exactly the same workflow, but there are often synergies in the processes and technology being used.

With AI tools built specifically for media, such as those from Amazon Web Services, we can reduce the amount of custom training needed. Before AI can reach its full potential, we need media-specific tools for every part of the chain, from capture through to distribution.
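
As an example of what calling such a tool looks like, here is a sketch that starts an asynchronous label-detection job with Amazon Rekognition Video through boto3; the bucket and file names are placeholders.

```python
# A sketch of auto-tagging with Amazon Rekognition Video via boto3.
# Bucket, key and credentials are placeholders; the job runs
# asynchronously, so results are fetched once it completes.
import time
import boto3

rekognition = boto3.client("rekognition")

job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "my-media-bucket", "Name": "programme.mp4"}}
)

# Poll until the job finishes (a real system would use SNS notifications).
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"])
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

for item in result.get("Labels", []):
    label = item["Label"]
    print(f'{item["Timestamp"]}ms: {label["Name"]} ({label["Confidence"]:.1f}%)')
```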

And even with media-specific tools, custom training is needed to ensure they work for each specific media company. In the case of a niche video service provider, custom training can teach the AI to recognise different car models, for example.

The AI tool will need data, and lots of it. By feeding in video content that has already been manually tagged, you can teach the system to recognise and tag similar content.
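
A minimal sketch of that training step, using transfer learning in PyTorch on frames that have already been manually tagged; the folder layout and hyperparameters are assumptions.

```python
# A minimal transfer-learning sketch: fine-tune a pretrained image
# classifier on frames that have already been manually tagged.
# Assumes frames are saved as tagged_frames/<label>/<frame>.jpg.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("tagged_frames", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the head only
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```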

This needs to be done for every process you need the AI to perform, and of course the results need to be manually checked to ensure the AI is classifying things correctly.

Once live, it is important to continually review those processes to identify where the AI is going wrong, or simply not saving as much time as it could. When those areas are identified, you can train the AI on the fly for those bespoke cases. Doing this effectively requires a good UI connected to your system to make it a seamless process.
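
One common pattern for that feedback loop is to route low-confidence predictions into a review queue, whose corrections then become new training data. The sketch below is a toy illustration; the threshold and data structures are assumptions.

```python
# A sketch of the feedback loop: route low-confidence tags to a
# human review queue and collect the corrections as new training
# data for on-the-fly retraining.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off for human review

@dataclass
class Prediction:
    asset_id: str
    tag: str
    confidence: float

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    corrections: list = field(default_factory=list)  # grows into training data

    def triage(self, pred: Prediction) -> None:
        if pred.confidence < CONFIDENCE_THRESHOLD:
            self.pending.append(pred)  # surface in the review UI

    def correct(self, pred: Prediction, true_tag: str) -> None:
        self.corrections.append((pred.asset_id, true_tag))

queue = ReviewQueue()
queue.triage(Prediction("clip-042", "Audi R8", 0.55))  # sent for human review
```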

While some media workflows and requirements could be unique to one media company, there is often much more crossover. The more tools developed and custom training deployed to solve a hurdle for one company, the more that could become available for others with similar challenges. Perhaps the future lies in more collaboration between providers to determine what some of that looks like?

Rickard Lönneborg is CEO at Codemill.