The rapid adoption of AI technologies, particularly Generative AI, has sparked global debate around AI's impact on the creative sector. The benefits of the technology need to be balanced with the implications for intellectual property (IP) rights holders, both in the inputs and outputs of these AI tools. Copyright holders have raised concerns about the potential unauthorised use of their work to train AI models and the risk that the tools produce infringing copies, affecting both income from copyrighted works and investment in the creative industries. AI developers, meanwhile, require access to large training datasets to ensure high-quality outputs and commercial success.
In light of these issues, the UK is consulting on its approach to mitigating the risks that AI development and output pose to copyright holders and clarifying ownership of IP in AI outputs.
The UK is considering a variety of approaches, including whether to introduce text and data mining permissions, labelling of AI-generated content, training dataset transparency, and new rights for individuals replicated in AI-generated products. Many of these issues are also under consideration, or already in force, in other jurisdictions. For instance, the EU already has text and data mining permissions, and the new AI Act includes provisions on content labelling.
The consultation runs until 25 February 2025.
Below, we provide a high-level summary of the consultation's key proposals, separated into those relating to AI inputs and AI outputs, together with next steps to consider:
Summary of key proposals (non-exhaustive)
AI training inputs
Wider clarification of copyright law
The consultation recognises the importance of removing existing ambiguity in the law and ensuring that the UK's copyright laws are internationally interoperable.
Data mining exception with rights reservation
In line with the EU, the consultation proposes implementing a new provision that would permit text and data mining of copyrighted works for AI development (including for commercial purposes), provided that AI developers have lawful access. The exception would encompass works available online or under contractual agreements (for example, via a subscription) but would not apply where a rights holder has explicitly reserved their right to prevent use of their works for training AI.
Technical standards
The consultation welcomes the development of technical solutions to restrict the use of copyrighted works in AI training where the rights holder has enforced a rights reservation. However, it notes that the current lack of standardisation is challenging for rights holders to navigate; standardised approaches would enable rights holders to signal their preferences regarding the use of their works, and AI developers to respect those preferences.
Contracts and licensing
The consultation underscores the importance of fair and transparent licensing agreements for AI training data, particularly for individual creators. It proposes collective licensing and data aggregation services as potential solutions for streamlining the licensing process to ensure that rights holders can seek fair remuneration for their work, and that AI developers have viable means to access datasets.
Transparency
The consultation proposes mandating transparency from AI developers regarding their training data, which may include disclosing the use of specific works and datasets, details of web crawlers (ownership, purpose, etc.), and potentially requiring records and compliance demonstrations. This would inform rights holders about the use of their works, but may pose practical challenges, especially for smaller firms and those concerned about the impact on commercial confidentiality.
Encouraging research and innovation
The consultation seeks information about how to tailor requirements based on company size and the specific application of AI technologies, aiming to ensure that any new regulations are proportionate and do not stifle AI research and innovation.
AI outputs
Computer-generated works
Currently, the Copyright, Designs and Patents Act 1988 provides copyright protection to certain types of works generated entirely by a computer. The consultation seeks input on the legal clarity, effectiveness, impact, and moral case for this protection in the age of advanced AI, seeking views on whether to retain, clarify, or potentially remove it.
Infringement and liability
The consultation reiterates that content generated by an AI model will infringe copyright if it reproduces a "substantial part" of a protected work. The liability of both AI developers and users in such cases is also considered. The consultation seeks views on whether there are specific areas where copyright law may be deficient or where there may be barriers to enforcement.
AI output labelling
The consultation proposes regulating and mandating consistent labelling of AI-generated content to better inform users of how content was created.
Digital replicas (deepfakes)
The government recognises concerns about AI being used to create realistic but fake images, videos, or audio recordings of people ("deepfakes"). The consultation highlights a complex legal landscape and the need to balance free speech with protecting individual rights, and seeks input on whether current laws are sufficient to address this.
Next steps
The consultation is open until 25 February 2025. Following the consultation, the government will assess the responses and further develop its policy and, if necessary, legislative proposals.
Your contacts
If you would like to speak to the Deloitte team please contact:
Joey Conway, Internet Regulation Partner, Legal Lead
Nick Seeber, Global Internet Regulation Lead Partner, Deloitte
Scott Bailey, Internet Regulation Director, AI risk and regulation lead
Valeria Gallo, Innovation Policy Lead, Deloitte
Nia Thomas, Associate, Deloitte Legal