Artificial Intelligence and Copyright: The Canadian Cultural Community Speaks Out


The extended period set by the Canadian Government (through Innovation, Science and Economic Development Canada, ISED) for responses to its consultation paper on Artificial Intelligence (AI) and Copyright closed on January 15. We will now start to see a flurry of submissions released by participants as ISED digests and assesses the input it has received. One of the first is the submission from the Coalition for the Diversity of Cultural Expression (CDCE), which represents over 360,000 creators and nearly 3,000 cultural businesses in both French and English-speaking parts of Canada. CDCE’s membership includes organizations representing authors, film producers, actors, musicians, publishers, songwriters, screenwriters, artists, directors, poets and music publishers—just about every profession you can think of that depends on creativity and on protection for creative output. The CDCE submission highlights three key recommendations, summarized as follows:

  • No weakening of copyright protection for works currently protected (i.e., no exception for text and data mining that would allow copyrighted works to be used without authorization to train AI systems)
  • Copyright must continue to protect only works created by humans (AI-generated works should not qualify)
  • AI developers should be required to be transparent and to disclose what works have been ingested as part of the training process.

While none of these recommendations are surprising, and from my perspective they are eminently reasonable, I am sure we will also see a number of submissions arguing that, “in the interests of innovation”, access to copyrighted works is not only essential but should be freely available without permission or payment. OpenAI, the motive force behind ChatGPT—and the defendant in the most recent high-profile copyright infringement case involving AI (When Giants Wrestle, the Earth Moves (NYT v OpenAI/Microsoft))—has already staked out part of this position. In its brief to the UK House of Lords Select Committee looking into Large Language Models (LLMs), a key technology driving AI development, the company says:

“Because copyright today covers virtually every sort of human expression–including blog posts, photographs, forum posts, scraps of software code, and government documents–it would be impossible to train today’s leading AI models without using copyrighted materials (emphasis added). Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

OpenAI claims that it respects content creators and owners and looks forward to continuing to work with them, citing, among other things, the licensing agreement for content it has signed with the Associated Press. But failure to reach a licensing deal with the New York Times is really the crux of the lawsuit that the media giant has brought against OpenAI and its key investor Microsoft. If reports are true that OpenAI’s licensing deals top out at $5 million annually, it is not surprising that licensing negotiations between the Times and OpenAI broke down over such lowball offerings.

As for the CDCE submission to ISED, it recommends that the government refrain from creating any new exceptions for text and data mining (TDM), since this would interfere with the ability of users and rightsholders to set the boundaries of the emerging market in licensing. This is the position the British government has just confirmed, deciding against a copyright exemption for AI after playing footsie with the concept for over a year. Apart from the licensing deals that OpenAI has with the Associated Press and German multimedia giant Axel Springer, the CDCE paper notes a range of other recent examples of content owners offering access to their product through licensing arrangements, including Getty Images, Universal Music Group and educational and scientific publishers like Elsevier. The paper also urges the government to avoid interfering in the market when it comes to setting appropriate compensation, leaving it to market players or, where the players can’t reach agreement, to the quasi-judicial Copyright Board.

In my view, licensing is going to be the solution that eventually levels the playing field, but to get there it will require that major content players lock out the AI web-crawlers while pursuing legal redress, as the NYT is doing. This will help to open the licensing path to smaller players and individual creators who don’t have the resources to employ either technical or legal remedies. (The issue of what has already been ingested without authorization still needs to be settled.) As for the tech industry’s suggestion that creators can opt out of content ingestion if they wish, CDCE rightly points out that this stands the world on its head and would be contrary to longstanding copyright practice. Not only is it impractical in a world where what goes into an AI model is a black box (thus the imperative for transparency), but it is like saying a homeowner has to request not to be burgled, or else can expect to become a target.
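For those wondering what “locking out” a crawler looks like in practice, the standard mechanism is a site’s robots.txt file, which well-behaved crawlers consult before scraping. Below is a minimal sketch in Python (standard library only) that checks whether a site’s robots.txt disallows some of the publicly documented AI crawler tokens, such as OpenAI’s GPTBot. The token list and the example site are illustrative assumptions, and it is worth stressing that robots.txt is purely advisory: it only stops crawlers that choose to honour it, which is exactly why legal redress remains part of the equation.

```python
# Minimal sketch: verify whether a site's robots.txt locks out AI crawlers.
# A blocking rule in robots.txt looks like this (illustrative):
#   User-agent: GPTBot
#   Disallow: /
from urllib.robotparser import RobotFileParser

# Publicly documented AI crawler tokens; treat this list as illustrative,
# since vendors add and change tokens over time.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

def ai_crawler_access(site: str) -> dict[str, bool]:
    """Return {crawler_token: may_fetch_front_page} for a given site."""
    rp = RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return {bot: rp.can_fetch(bot, site) for bot in AI_CRAWLERS}

if __name__ == "__main__":
    # Hypothetical example site, for illustration only.
    print(ai_crawler_access("https://www.example.com"))
```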

On the question of whether AI-generated works should be granted copyright protection, CDCE points out the double standard of proposing an exception to copyright for TDM on inputs while claiming copyright protection for AI-generated outputs. The need for human creativity is a line that has been firmly held by the US Copyright Office, which has pushed back on various attempts to register AI-generated (as opposed to AI-assisted) works. Canada has not been quite so clear-cut in its position, owing to the way in which copyright is registered (almost by default, without examination) in Canada, as I pointed out in this blog post (A Tale of Two Copyrights). While AI-generated works have received copyright protection in Canada (Canadian Copyright Registration for my 100 Percent AI-Generated Work), this is more by oversight than design, given the way the Canadian copyright registration system works.

Thirdly, we turn to transparency, a sine qua non if licensing solutions are to be implemented. If authors don’t know whether their works are being used to train AI algorithms, or can’t easily prove it, licensing will fall flat. CDCE calls for publication of all content ingested into training models, disclosure of any outputs that contain AI-generated content, and the design of AI models to prevent generation of illegal or infringing content. This is similar to requirements already under consideration in the EU.
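To make the transparency recommendation concrete, here is a purely illustrative sketch of what a per-work ingestion disclosure record might contain if AI developers were required to publish one. The field names and structure are my own assumptions for the sake of illustration, not an existing or proposed standard.

```python
# Illustrative sketch of a training-data disclosure record; every field
# name here is hypothetical, invented for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class IngestedWorkRecord:
    title: str
    rightsholder: str
    source_url: str
    date_ingested: str      # ISO 8601 date
    licensed: bool          # whether use was authorized
    license_reference: str  # license identifier, empty if unlicensed

# A single (hypothetical) entry in a published ingestion manifest.
record = IngestedWorkRecord(
    title="Example Article",
    rightsholder="Example Publisher",
    source_url="https://www.example.com/article",
    date_ingested="2024-01-10",
    licensed=True,
    license_reference="LIC-2024-0001",
)

# Published in machine-readable form so rightsholders can audit use.
print(json.dumps(asdict(record), indent=2))
```

A public, machine-readable manifest along these lines is what would let an author verify that a work had been ingested, and on what terms.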

CDCE also makes the important point that it is not just copyright legislation that defends individual and collective rights against the incursions of AI and big AI platforms. While the Copyright Act offers some protection to creators, privacy legislation is important for all citizens. As the UK Information Commissioner has pointed out in a recent report, the legal basis for web-scraping depends on (a) not breaching other laws, such as intellectual property or contract law, and (b) conformity with UK privacy law (the UK GDPR, or General Data Protection Regulation), under which the privacy rights of the individual may override the interests of AI developers, even if data scraping meets other legitimate interest tests.

Finally, there is the question of the moral rights of creators, which can be threatened by the misapplication of AI, whether through infringement of a performer’s personality or publicity right, distortion of their performance or creative output, misuse of their works for commercial or political purposes, or any of the other harms that the creator’s right to authorize use of their work is meant to prevent.

Quite apart from the question of AI, there are of course other outstanding copyright questions that need to be resolved urgently, including the longstanding issue of the ill-conceived education “fair dealing” exception that has undermined, if not permanently damaged, the educational publishing industry in Canada. This exception needs to be narrowed: users should retain unlicensed access to copyrighted materials under fair dealing guidelines for study, research and educational purposes, but institutional use should be limited only to situations where a work is not commercially available under a license from a rightsholder or collective society. While this issue requires looking back and fixing something that is already broken, policy making with respect to AI and copyright needs to anticipate the future and “do no harm”, while requiring AI developers to open up their black boxes and respect existing rights. This should be achieved by maintaining and protecting the rights of creators in ways that will facilitate market-based licensing solutions for the use of copyrighted content by AI developers, while ensuring that creative output remains the domain of human beings, not machines.

© Hugh Stephens, 2024.

When Giants Wrestle, the Earth Moves (NYT v OpenAI/Microsoft)


There is no better way to start the New Year, 2024, than with a commentary on Artificial Intelligence (AI) and copyright. It was the big emerging issue in 2023 and is going to be even bigger in 2024. The unlicensed and unauthorized reproduction of copyright-protected material to train AI “machines”, in the process often producing content that directly competes in the market with the original material, is the Achilles heel of AI development. To date, no one knows whether it is legal to do so, in the US or elsewhere, as the issue is still before the courts. The cases brought to date by artists, writers and image content purveyors like Getty Images have not always been the strongest or best thought out. In one instance, the plaintiffs had not even registered the copyright on some of the works for which they were claiming infringement, a fatal flaw in the US, where registration is a sine qua non for bringing an infringement case. That may have been the most egregious example of a rookie error, but in general the artists’ and writers’ cases have not gone too well so far, although the process continues. Some cases are on stronger ground than others. Here is a good summary. The Getty Images case will be an interesting one to watch. And now the New York Times has weighed in with a billion-dollar suit against OpenAI and Microsoft. The big guys are now at the table and the sleeves are rolled up. The giants are wrestling.

What is at issue could be nothing less than the survival of the news media and the ability of individual creators to protect and monetize their work. It could also open a pathway to legitimacy for the burgeoning AI industry. The ultimate solution is surely not to put a halt to AI development, nor to put content creators out of business. It is to find a modus vivendi between the needs of AI developers to ingest content in order to train algorithms that will “create” (sort of) content–assembled from vast swathes of input–and the rights of content creators. While training sets are generally very large, some of the input can be very creator-specific and the output very creator-competitive. This is where the New York Times comes in.

The Times, like any enterprise, needs to be paid for the content it creates in order to stay in business and create yet more content. If its expensively acquired “product”, whether news, lifestyle, cooking, book reviews or any of the other content that Times’ readers crave and are willing to pay for, can be obtained for free through an AI algorithm (“What is the most popular brunch recipe in the NYT using eggs, bacon and spinach?”, or “What does Thomas Friedman think of…”), this creates a huge disincentive to go to the source and undermines journalism’s business model, already under severe stress and threat.

The Times is one of the few journals that has managed to thrive, relatively speaking, in the new digital age at a time when many of its competitors are dying on the vine. According to Press Gazette, the New York Times is the leading paywalled news publisher, with 9.4 million subscribers. (Wall Street Journal and Washington Post are numbers two and three respectively). You need to pay to read the Times, and why not? But paying for access does not give you the right to copy the content, especially for commercial purposes. (The Times offers various licensing agreements for reproduction of its content, with cost dependent on use). Technically, all it takes is one subscription from OpenAI and the content of the Times is laid bare to the reproduction machines, the “large language models”, or LLMs, used by the AI developers. The Times has now thrown down the gauntlet. Its legal complaint, 69 pages long, makes compelling reading. If there ever was a “smoking gun” putting the spotlight directly on the holus-bolus copying and ingestion of copyright protected proprietary content in order to produce an unfair directly-competing commercial product that harms the original source, this is it. It’s a far cry from earlier copyright infringement cases brought by some artists and writers.

While you can read the complaint yourself if you are interested (recommended reading), let me tease out a few of the highlights. After setting out the well-proven case for the excellence of its journalism, the Times’ complaint notes that while the defendants engaged in widespread copying from many sources, they gave Times’ content particular emphasis when building their LLMs, thus revealing a preference that recognized the value of that content. The result was a free ride on the journalism produced at great expense by the Times, using Times’ content to build “substitutive products” without permission or payment.

Not only does ChatGPT at times regurgitate the Times’ content verbatim, or closely summarize it while mimicking its style; at other times it wrongly attributes false information to the Times. This is referred to in AI circles as “hallucination”, something the complaint labels as misinformation that undermines the credibility of the Times’ reporting and its reputation. Hallucination is a particularly dangerous element of AI-produced content. Rather than admitting it doesn’t know the answer, the AI algorithm simply makes one up, complete with false references and attributions, all of which makes it very difficult for the average reader to separate fact from fiction. This misinformation is the basis of the Times’ complaint for trademark dilution, which accompanies the various copyright-related complaints of infringement. Concrete examples of such misinformation are provided in the complaint.

So too is ample evidence of users exploiting ChatGPT to pierce the Times’ paywall, by asking for the completion of stories that have been blocked for non-subscribers. There are concrete examples of carefully researched restaurant and product reviews that have been replicated virtually verbatim. Not only is the Times’ subscription model undermined, but the value it derives from reader-linked product referrals from its own platform bleeds to Bing when the product is accessed through Microsoft Search enabled by ChatGPT. Examples are given of full news articles based on extensive Times’ investigative reporting being reproduced by ChatGPT, with only the slightest variations. These are not composite news reports of what is happening in Gaza, for example, but a word-for-word lifting of a Times’ analysis of what Hamas knew about Israeli military intelligence. The Times’ complaint makes for chilling reading. AI’s hand has been caught firmly in the cookie jar.

What does the Times want out of all of this? The complaint does not specify a dollar amount, while noting the billions in increased valuation that have accrued to OpenAI and Microsoft as a result of ChatGPT. However, it asks for statutory and compensatory damages, “restitution, disgorgement, and any other relief that may be permitted by law or equity”, as well as destruction of all LLMs incorporating New York Times’ content, plus, of course, costs. If the Times gets its way, this will be a huge setback for AI development as well as for OpenAI and Microsoft, but of course it may not come to that. The complaint notes that the Times had tried to reach a licensing deal with the defendants. OpenAI cried foul, expressing “disappointment” and noting that it had been having “productive” and “constructive” discussions with the Times over licensing content. However, to me this is a bit like stealing the cookies, getting caught red-handed, offering to negotiate to pay for them, and then crying foul when your offer is rebuffed. The Times has just massively upped the ante, making the potential licensing fees much more substantial.

The irony is that the use of NYT material by OpenAI, or indeed other platforms like Google or Facebook, potentially brings some advantage and drives some business to the Times, while obviously also providing commercial benefits to the AI program, search engines or social media platforms. The real question will be how that proprietary content is used, and how much is paid to use it. A similar issue is being played out in another context, most recently in Canada with Bill C-18, where news media content providers wanted the big platforms (Google and Meta/Facebook) that derive benefit from using or indexing that content to pay for accessing it. The result in Canada was both a standoff and a compromise: Facebook blocked Canadian news content rather than pay for it, while Google agreed to create a fund for access by the news media in return for being exempted from the Canadian legislation.

The NYT-OpenAI/Microsoft lawsuit is a different iteration of the same principle. Businesses that gain commercial advantage from using proprietary content of others should contribute to the creation of that content, either through licensing or some other means such as a media fund. The most logical outcome of the Times’ lawsuit is almost certainly going to be a licensing agreement. Given the seemingly unstoppable wave of AI development, meaningful licensing agreements would seem to be the best way to ensure fairness and balance of interests going forward.  

A Goliath like the New York Times is in a much better position to make this happen than a disparate group of writers and artists. Indeed, there are logistical challenges in licensing the works of tens of thousands of content creators. In an earlier blog post, I postulated that copyright collectives might find a role for themselves in this area in future. In my view, ultimately the only logical solution to the conundrum of respecting rightsholders while facilitating the development of AI is to find common ground through fair and balanced licensing solutions. The wrestling giants of the NYT and Microsoft may help show the way.

© Hugh Stephens 2024. All Rights Reserved.