Jesse Shemen at Papercup discusses what AI companies can do to ensure AI and human creativity can coexist
AI has a host of advantages, from automating problem solving to advancing innovation in education and healthcare. But it has also sparked widespread concern across industries and disciplines, most notably in the entertainment sector.
Last year’s Hollywood strikes are a clear indication of this. Anxiety over AI protections, including consent requirements for actors, the emergence of synthetic performers and the potential for job displacement, reflects just some of the ethical dilemmas surrounding the rapidly growing AI industry.
For example, SAG-AFTRA recently announced an agreement which permits their voice actors to work with an AI company to create digital replicas of their voices, which can ultimately be licensed for use in video games and other interactive media projects. The move has triggered backlash, with some members voicing concerns over the future of their profession.
Broader fears around the technology also persist among industry players and the general public alike, and these fears are not without merit. Some AI voice generation methods operate in ethically fraught territory. A key example is the use of deepfakes, which can replicate the exact features of a person’s voice while falsifying the words they appear to say on screen.
Despite the boom in AI adoption and the surge in the technology being used for deceitful purposes, there is still no consensus on how to tackle these issues.
It’s true that the UK Government has made a start, as shown by the recent AI Summit. Bringing together international leaders, AI companies and research experts, it provided a platform to assess the risks posed by the technology, the solutions that could be implemented and the standards that could be developed to support governance.
Conversations around safety are only set to intensify. However, there is a pressing need to turn discussion into action, and AI companies have a collective responsibility to play a leading role.
While AI start-ups and SMEs unquestionably have limited resources in comparison to larger businesses, there are still measures they can put in place to help ease concerns while ensuring they are able to grow and excel.
Papercup is a prime example of this. We work with international organisations such as Bloomberg, Fremantle and Insider, using our AI tools to localise content and help our customers reach new audiences that are keen to consume videos in their native language.
For Papercup, operating responsibly is key. We hand-select the companies we work with, providing a gated service to ensure our technology is only used for content that aligns with our values and standards. We commission voice actors from around the world to help develop and enhance our tools, and we have also recently launched a pledge.
The pledge, the first of its kind in the UK, not only promises transparent, consensual and ethical use of voice actors’ data but also formalises our commitment to paying the talent we work with fairly. This approach hasn’t hindered our success: we have already helped over 1 billion people in non-English-speaking territories consume news and entertainment content.
AI has an important role to play in driving progress across industries. In the media and entertainment sector in particular, companies are showing not only how the technology can improve access to content but also that AI and human creativity can coexist.
AI can ultimately be a force for good, but it is the responsibility of leaders in this space to promote and commit to the ethical use of the technology.
Jesse Shemen is CEO at Papercup