Joe Lewis of The Voiceover Gallery explains his concerns about the use of AI-produced voiceovers
The backlash to ScotRail’s AI-produced Scottish voice, particularly from the voiceover artist who claims the company is using her voice, demonstrates the importance of ethical consent frameworks and the challenges this use of AI poses for voiceover actors.
This is where AI gets murky and, for the record, not something we would consider ethical. I would like to think that whoever represents the voiceover artist in this case would have at least flagged the fact that the contract signed over the right for the company to synthesise her voice.
Ideally, agencies and companies would always act completely ethically when using the content of voice actors or any other artists. However, it is naive to assume that this is always the case. I know that at The Voiceover Gallery, we would raise our concerns with the voice actor if we saw any clause to sign away their voice in perpetuity with the option to clone or synthesise it.
However, without meaning to state the obvious, it is also the responsibility of the actor to read their contract thoroughly before signing anything.
In this case, we are not privy to the full contract, nor do we know how generous the initial financial agreement was, so it is impossible to ascertain how unethical the original deal really was. Nonetheless, I am deeply sympathetic to the voiceover artist’s frustration, and it does feel wrong.
This case also raises further questions, as it does seem that the artist’s voice wasn’t used for its intended purpose.
There are scenarios where the voice may have been used for different content and an individual may have ethical objections. It is not unreasonable to assume that many artists and content creators may take issue with their voice being used to promote certain products, such as weapons, fossil fuels or tobacco products, to name a few.
Music industry contracts often contain ‘ethical waivers’, whereby writers waive the right to refuse the use of their voice on certain content even if they have a moral objection to their voice being associated with the brand. It is an ‘opt-out’ approach, but at least there is some legal mechanism in place.
The reality of the use of generative AI in the marketplace is a legal minefield. When it comes to the use of artistic content, the media sector is putting particular pressure on the government to implement a protectionist approach to legislation. This is particularly apparent when it comes to its use when training data sets.
The most visible campaign was in February, when almost every UK national newspaper used their front pages to promote a campaign against government proposals to create a copyright exemption for AI companies. On this front, there is yet to be any meaningful progress.
There are also other pitfalls in using AI-generated voices for many projects. Place names are a nightmare; one only has to look at a map of Wales to see how easily pronunciations can get confused.
We know from experience that, when dealing with projects requiring the pronunciation of numerous place names, it’s best to avoid synthesised voices.
Although it can technically be done, it takes longer than having someone in the studio reading them correctly.
I think that using AI in general is a trade-off. Yes, you can create voiceovers more cheaply, but you will always sacrifice a level of quality.
I certainly do not believe it is a good idea for train companies to use synthetic voices to pronounce place names and justify the errors as a work in progress.
Besides the obvious lack of quality, there are also accessibility issues to consider – those with visual impairments, for example, may require clear pronunciations to understand where they are.
Joe Lewis is head of audio at The Voiceover Gallery