
Image: Shutterstock (modified)
On March 18 the UK’s Labour government finally confirmed what everyone has known for months: its preferred policy option, allowing AI developers to freely use copyrighted content to train their AI models unless rightsholders specifically opted out, was a shambles. The government had been backpedalling for a while, but the issuance of the Parliamentary-mandated Report on Copyright and Artificial Intelligence, all 125 pages of it, finally drew a line under the prevarications. The Report was issued jointly by the two Departments that represent opposite ends of the spectrum on the copyright/AI question, the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport. Its inconclusive results reflect that duality.
The government’s initial Consultation Paper had proposed four options, numbered 0 through 3, to deal with the issue of unauthorized use of copyrighted content to train AI platforms:
- Option 0: Do nothing (status quo). Copyright and related laws would remain as they are, leaving the courts to settle disputes between rightsholders and unauthorized users;
- Option 1: Strengthen copyright by requiring licensing in all cases of use of copyrighted content;
- Option 2: Legislate a broad text and data mining (TDM) exception to copyright law. (Britain already has a TDM exception, but it is limited to non-commercial research purposes);
- Option 3: Create a TDM exception with opt-out and transparency measures. Rightsholders would be required to give notice when their works were not to be accessed, and AI developers would be required to disclose what works they had used.
The Paper indicated that the last option, Option 3, was the government’s preferred choice. Unfortunately for the government, fully 97% of the 11,520 respondents to the consultation survey disagreed. To put it another way, only 3% of respondents endorsed the government’s preferred option. (Note to UK government: wipe egg off face.) Rightsholders did not want the onus to be on them to opt out, both because copyright law places the burden on the user, subject to fair dealing and other statutory exceptions, to obtain permission in advance from the rightsholder, and because opting out is not necessarily easy to do. AI developers, for their part, objected to the transparency provisions. They preferred Option 2, a broad TDM exception allowing them to help themselves to whatever content they deemed useful for AI training without let or hindrance. The last thing the tech industry wants is to be required to document what content it has appropriated without authorization. To oppose Option 3, rightsholders mounted a widespread and very effective campaign reminding legislators of the value of Britain’s creative industries, both economic and cultural. Everyone from those two noble knights, Sir Elton John and Sir Paul McCartney, to Ed Sheeran, Dua Lipa, Kazuo Ishiguro, Andrew Lloyd Webber, Cat Stevens, Sting, Tom Stoppard, and on and on, maybe everyone who is anyone in the cultural world in Blighty except Banksy, spoke out. Now the government has thrown up its hands and admitted it got it wrong. The blatant content giveaway cum confiscation is not going to happen, at least not in the form initially proposed.
The Starmer government’s particular slice of humble pie was worded as follows:
We must take the time needed to get this right. We will not introduce reforms to copyright law until we are confident that they will meet our objectives for the economy and UK citizens. This means protecting the UK’s position as a creative powerhouse, while unlocking the extraordinary potential of AI to grow the economy and improve lives. Any reform must ensure that right holders can be fairly rewarded for the economic value their work creates, and that they are protected against unlawful and unfair use of their work. It must also ensure that AI developers can access high quality content. It is clear through the consultation and our subsequent engagement that there is no consensus on how these objectives should be achieved.
Truer words were never spoken. And there will never be consensus as long as one of the parties is an entitled tech industry that feels it can freely confiscate valuable content produced and owned by others, and which has no qualms about accessing pirate websites to do so. Legal consequences are required to bring it to the negotiating table.
Although we now know the UK government will not forge ahead with the Option 3 opt-out proposal, we still don’t really know what it will do. Amidst the relief in the creative community that Option 3 has been dropped as the chosen way forward, there is still a sense of unease about what might happen instead. Composer Ed Newton-Rex, one of the leading activists among the creators, has voiced his concerns on X. While the government has withdrawn its preference for Option 3, that option remains on the table, as does the possibility of some other form of TDM exception. He is worried that while there is talk of compensation for rightsholders, permission does not seem to be part of the equation. And with the dropping (for now at least) of Option 3, transparency requirements for the tech industry are once again in doubt.
What happens next on this issue in the UK? More study, more research, more monitoring of developments. In other words, back to Square One. The one new development coming out of the government’s Report is an acknowledgement that something needs to be done with respect to digital replicas. As described in the Report, this involves the use of AI to replicate or mimic the appearance or voice of individuals. Current copyright law is not well adapted to protect against such misuse. The result could be the introduction of a right of personality in the UK.
When it comes to the intersection of copyright and AI development, the UK government is trying to resolve an issue with which many countries are grappling. The question is how to get a slice of the AI investment pie by incentivizing the tech industry without throwing the creative community under the bus. In the US, finding the balance has largely been driven by the courts. The White House has just issued its National Policy Statement on Artificial Intelligence. Among its many policy positions, the Statement indicates this question should continue to be left to the courts to decide, even though the White House believes that unauthorized and uncompensated use of copyrighted material for AI development is fair use under US law. In Australia the government looked at the same issue and in the face of vocal and organized opposition from the creative sector, explicitly ruled out introducing a TDM exception. India tried to find the balance by proposing the establishment of an unworkable compulsory licence regime, a scheme that I described in a recent blog post as the “worst of all worlds”. It was as widely condemned by both creators and tech developers as the UK’s Option 3. Canada, which like Australia does not have a TDM exception in its copyright law but which is being pushed by the tech industry to loosen restrictions in the face of current and anticipated lawsuits, is facing similar questions.
While this may seem to some (like the UK government, for example) an almost intractable problem requiring much more work, study and consultation to resolve, in fact it is not all that difficult. Cultural industries generally are not opposed to the development of AI. In fact, many creators use it to assist their work. But they want to have some say over whether and how their work is used, and they want to receive compensation when it is used. It is true that some individuals may wish to have nothing to do with AI and would like to see it go away. That, however, is not going to happen, although their right to withhold their work from the development of applications that may unfairly compete with the content they produce must be respected. The best outcome is a win/win scenario where creators voluntarily opt in to the AI development process and share in its benefits. This is already happening on an increasingly large scale through commercial licensing deals, with news media outlets, studios, music publishers and book publishers all signing licensing agreements with AI developers. Left out at this stage are most individual authors, artists and small publishers, but voluntary collective licensing can fill the gap. As a recent example, at the London Book Fair in March of this year, the first stage of an opt-in collective licensing initiative was launched by Publishers Licensing Service to supplement direct agreements between publishers and AI companies.
The “magic formula”, the win/win solution that so many countries are finding so hard to unlock, is based on three principles, well articulated by Canada’s Coalition for the Diversity of Cultural Expression (CDCE):
1. Authorization (Permission)
2. Licensing (Remuneration)
3. Transparency (Disclosure)
Acceptance of these three cardinal principles by AI developers would cut through what the UK government seems to see as a Gordian Knot. The solution is not all that difficult, but it requires courage to stand up to the tech industry’s threat of taking its ball and going elsewhere to play. Let’s hope that as the UK and other governments go back to the drawing board, they use these principles as guideposts to arrive at a solution that, at the end of the day, best serves everyone.
© Hugh Stephens, 2026. All Rights Reserved
