Clear, balanced regulations on using IP to train AI are vital for the screen sector
Long before the advent of AI, there was tension between intellectual property rightsholders – whether artists, production companies, record labels or distributors – and emerging technologies. This struggle is not new, and new technology is often presented as a benefit to the consumer at the expense of the rightsholder. Take the rise of streaming platforms, which revolutionised the way consumers engage with TV and film, but in doing so undermined the traditional revenue models from which rightsholders in the sector used to benefit.
Now that generative AI models are widely accessible, there is a further threat to rightsholders’ intellectual property. AI technology has the potential to revolutionise the screen sector, as noted by the BFI in its recent report – so how can we navigate the legal challenges it poses and ensure the fair exploitation of IP rights?
A new threat
In principle, rightsholders should have control over the exploitation of their IP. In reality, however, it is difficult to fully control how content is used online. The sheer size of the datasets on which generative AI models, including large language models, are trained increases the likelihood that a given model has ingested copyright-protected works without the appropriate permissions. That risk is heightened further if those works were made available on the internet without their owners’ permission in the first place.
The practice of scraping the internet for data to train large language models has raised questions about how this can be done fairly and legally. The landscape is complex and nuanced, and the UK government is attempting to grapple with it and draw lines in the sand about what is, and is not, permitted under UK law. The backlash from rightsholders in a variety of sectors to the suggested reforms has so far been widespread and vocal.
Differing methods
There have been disagreements over whether the government should intervene at all on this issue. Some creatives argue that non-intervention would have been the best way forward, on the basis that a licensing framework would likely have evolved naturally over time as frontier lawsuits worked their way through the courts and gradually imposed legal guardrails.
Similar practices have developed in other sectors and with other technologies, but a “do-nothing” approach would have left the current government open to criticism - especially since it has positioned itself as both pro-creative and pro-tech.
By legislating proactively, by contrast, the government can demonstrate active engagement with the issue and, in the process, try to position the UK as a market leader in the generative AI sector, even if it ultimately chooses to lean more towards protecting rights than towards carving out exceptions to copyright.
Persistent confusion
Unfortunately, the government’s proposal of a potential exception to the existing copyright rules, and the launch of a lengthy consultation and review process, has effectively stalled momentum in some parts of the creative sectors. At present, both sides are waiting to see exactly how, if at all, the government intends to reform the existing legal framework.
In the meantime, this continuation of the status quo should lead to more licensing deals emerging, in addition to the handful of high-profile agreements already announced, but it will in all likelihood also mean that tech companies continue to mine vast datasets which may well include unlicensed content.
From page to screen?
Some parts of the creative sector are adapting more quickly than others in a bid to find solutions while we wait for the government to make its decision. The Copyright Licensing Agency (CLA) is developing various templates in the hopes of setting a standard in the journalism industry, including a template generative AI training licence which the CLA claims will be an “innovative, collective licensing solution” that will help to ensure rightsholders get paid if their works are used to train AI models.
Due to be published by the autumn (according to the CLA website), this template could provide useful, accessible insight into how to structure licences of this nature. This could in turn become a tool that screen sector bodies and unions use for inspiration when seeking to protect their own rightsholders.
Ultimately, a solution along these lines seems the most workable in the absence of a clear legal position. Without legislation or case law to refer to, creatives and tech companies alike remain in an uncertain position, and in the meantime a licensing model - potentially modelled on the CLA’s approach - could help to restore some balance between the two sides.
- Henry Birkbeck is a counsel at Reed Smith