The Economics of Copyright: Incentives and Rewards (It’s Important to Get them Right)

Image: Shutterstock

Two years ago, in April 2022, the US Copyright Office (USCO) appointed its first Chief Economist, Dr. Brent Lutes. Many intellectual property offices have such a position, e.g., the UK IPO, IP Australia, the EUIPO, and WIPO. (Notably, Canada’s Intellectual Property Office–CIPO–does not). All these positions have broad responsibility for assessing the economics of IP generally, covering patents, trademarks, and industrial designs as well as copyright. In the US, the Patent and Trademark Office has its own Chief Economist. However, Lutes’ USCO position appears to be the only one devoted exclusively to assessing the economic impact of copyright. The position sits within the Office of Policy and International Affairs and is supported by a small team of economists, providing the Register of Copyrights, Shira Perlmutter, with policy-relevant research on economic issues related to copyright.

In an interview conducted last month, Lutes talked about the economic goals of copyright in terms of enhancing social welfare. He noted that the goal of copyright is to contribute to the welfare of society by promoting access to creative works, now and in the future, through market-based behavioural incentives. The goal of the Office of the Chief Economist is to gather more information to inform policy making, such as the geographic distribution of copyright activity or the demographic characteristics of creators. As but one example, is racial or ethnic diversity related to creativity? The economic issues surrounding AI and copyright, both pro and con, are another field of research the USCO will be exploring.

In addition to finding the right economic levers to stimulate production of creative works, economic studies of copyright also demonstrate the enormous impact copyright-based industries have on national economic welfare. While the impact can depend on which economic multipliers are used and how direct versus indirect benefits are calculated, there is no question that copyright industries in most economies are very significant as job creators and multipliers. For example, IP Australia in its most recent annual report estimates that cultural and creative activity contributes about 6% of Australian GDP annually, with design, fashion, publishing, broadcasting, electronic and digital media and film being the primary industries involved. In the US the figures are even more impressive. According to the International Intellectual Property Alliance, in 2021 (the last year for which statistics are apparently available), core copyright industries in the US, defined as those industries “whose primary purpose is to create, produce, distribute or exhibit copyright materials”, added $1.8 trillion to US GDP, accounting for 7.76% of the economy. Total copyright industries, a definition that includes industries partially dependent on copyright, such as fabric, jewellery or toys and games, accounted for another trillion US dollars, even when only a portion of their total value is included in the copyright calculation.

The UK Intellectual Property Office published its IP survey in 2022, comparing the role of patents, trademarks, registered industrial designs and copyright. While copyright industries were on the low side for exports (£4.7 billion as opposed to £120.6 billion for patents), copyright’s “non-financial value-added output” (IP data are not available for the financial industries, hence the description “non-financial”) trounced that of patent industries by almost 2:1. As with the US IIPA study, the UK report accounted for the degree to which certain industries depend on copyright, categorizing them as core, interdependent, partial or non-dedicated support industries, and adjusting the amount of copyright contribution accordingly. Book publishing, for example, is considered a 100% copyright industry and its value is counted in full, whereas for an industry such as paper manufacturing, only 25% of the value was included in the calculation of copyright benefits. This methodology followed that of the World Intellectual Property Organization, aka WIPO, which also conducts economic studies and assists national authorities with their own. Economists are careful people, not prone to exaggeration, and consistent methodology is important to ensure accurate measurement and reporting.
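To make the weighting concrete, here is a minimal illustrative calculation of the kind these studies perform; the 100% and 25% copyright factors are the ones mentioned above, while the value-added figures are purely hypothetical:

\[
\text{Copyright contribution} = \sum_i f_i \times VA_i = (1.00 \times £10\text{bn}_{\text{publishing}}) + (0.25 \times £4\text{bn}_{\text{paper}}) = £11\text{bn}
\]

where \(f_i\) is the copyright factor assigned to industry \(i\) and \(VA_i\) is that industry’s value added.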

WIPO worked with the Department of Canadian Heritage to produce a report in 2020 on “The Economic Impact of Canada’s Copyright-Based Industries”. As with other deep dives into the economic benefits of copyright, this study produced similarly notable statistics. For example, while many copyright opponents in Canada were deploring the extension of the copyright term of protection in Canada, arguing that the result would be an outflow of royalties to foreign rights-holders because Canada was a net importer of copyrighted materials, the Heritage report established that “Canada has exported more copyright-related services than it has imported, maintaining a trade balance surplus from 2009 ($2.5 billion) to 2019 ($5.6 billion)”. In actual fact, extending the copyright term in Canada brought with it the additional benefit of a reciprocal extended term in many foreign countries for Canadian works, clearly benefiting Canadian rights-holders. The Heritage study went on to document a range of other important outcomes such as employment (over 600,000 jobs), contribution to GDP ($95.6 billion) and percentage of GDP (4.9%). All figures are based on 2019 data; no update has been published since. It is just as well that Heritage Canada took the lead in preparing this report, since the government department holding lead statutory responsibility for copyright in Canada, the mammoth Innovation, Science and Economic Development Canada (ISED), unfortunately seems to treat copyright as but a tiny pimple on its elephantine rump.

While the studies cited above highlight the economic contribution that copyright industries make to national economies in terms of jobs and wealth generation, let us not forget the key point that Dr. Lutes underlined regarding the social welfare contribution of copyright through using market-based incentives to promote and encourage creativity and investment in creative outputs. It is hard, if not impossible, to put a dollar amount on the social welfare benefits of creative expression and cultural sovereignty, but they are immense if incalculable. Without copyright, not only would existing content-based industries be unable to thrive and expand, but the formula to encourage new, original content would be missing.

Notwithstanding the importance of a robust copyright framework for both economic and social welfare, creators and content-based copyright industries are facing major challenges today. Some are technological, like the emergence of generative AI; some behavioural, such as a wide tolerance, even acceptance, of piracy and free riding. The struggle against piracy is ongoing and protracted, a cat and mouse game. Free riding is what AI developers are doing on the backs of content creators through unauthorized training of AI models on copyrighted content, with resultant legal challenges. There is also the question of whether wholly AI-generated works should be accorded copyright protection. As the Copyright Alliance has observed, the Copyright Clause in the US Constitution is premised on the promotion of the “progress of science and useful arts” by protecting for a limited period of time the writings and discoveries of authors and inventors. Given that premise, it should be self-evident that creator incentivization is not applicable to machines, which neither need nor comprehend economic incentives to create.

Free riding is also what the education sector has been doing in Canada under the specious umbrella of “education fair dealing”, introduced through copyright amendments in 2012 that broadened the scope of fair dealing. Since then, the “education industry” at the public, secondary and post-secondary levels has been siphoning off economic value from writers and other creators to the tune of over CAD$200 million to date. Their legalized renunciation of collective reprographic licensing is ostensibly to benefit students but is in fact a transfer of wealth from creators to the bottom line of educational institutions. If a key objective of copyright is to incentivize creation of new content, such as materials used by educational institutions to teach students, then the current interpretation of education fair dealing in Canada upends a key rationale for granting copyright protection in the first place. (As a footnote, I should add that not all arguments in favour of copyright are based solely on economic incentives. There is also the question of natural justice and equity, providing authors with a degree of control over works they have created).

Since court challenges have unfortunately proven ineffective, the remedy for Canada’s education fair dealing fiasco is for the Government of Canada to amend the Copyright Act so that rightsholders are properly compensated when their works are used in Canada. Both the copyright collective in English Canada, Access Copyright, and its Québec counterpart, Copibec, have recently called for legal clarification of the nature and extent of educational fair dealing.

Thorough documentation of the contribution that copyright makes to economic and social welfare helps substantiate the case for adequate legal frameworks, including combatting piracy and ending copyright free riding. Sound economic data are essential to sound policy making. The initiative of the US Copyright Office to appoint a Chief Economist helps to meet these goals and is to be commended.  Should the Canadian Intellectual Property Office ever create such a position, its first task should be to evaluate the full economic and social costs of the current short-sighted interpretation of fair dealing in Canada’s education sector in terms of its negative long-term impact on creativity and cultural sovereignty in the country.

The Scottish writer Thomas Carlyle may have described economics as the “dismal science”, an oft-quoted remark, but rather than being dismal it is in fact just the opposite; it sheds light on the importance of copyright to maintaining a well-functioning, equitable and culturally rich modern society.

© Hugh Stephens, 2024. All Rights Reserved.

Artificial Intelligence and Copyright: The Canadian Cultural Community Speaks Out

Image: http://www.shutterstock.com

The extended period set by the Canadian Government (through Innovation, Science and Economic Development Canada, ISED) for response to its consultation paper on Artificial Intelligence (AI) and Copyright closed on January 15. We will start to see a flurry of submissions released by participants while ISED digests and assesses the input it has received. One of the first is the submission from the Coalition for the Diversity of Cultural Expression (CDCE), which represents over 360,000 creators and nearly 3,000 cultural businesses in both the French- and English-speaking parts of Canada. CDCE’s membership includes organizations representing authors, film producers, actors, musicians, publishers, songwriters, screenwriters, artists, directors, poets, music publishers—just about every profession you can think of that depends on creativity and protection for creative output. The CDCE submission highlights three key recommendations, summarized as follows:

  • No weakening of copyright protection for works currently protected (i.e. no exception for text and data mining to use copyrighted works without authorization to train AI systems)
  • Copyright must continue to protect only works created by humans (AI generated works should not qualify)
  • AI developers should be required to be transparent and disclose what works have been ingested as part of the training process (transparency and disclosure).

While none of these recommendations are surprising, and from my perspective are eminently reasonable, I am sure we will also see a number of submissions arguing that, “in the interests of innovation”, access to copyrighted works is not only essential but should be freely available without permission or payment. OpenAI, the motive force behind ChatGPT—and the defendant in the most recent high-profile copyright infringement case involving AI (When Giants Wrestle, the Earth Moves (NYT v OpenAI/Microsoft))—has already staked out part of this position. In its brief to the UK House of Lords Select Committee looking into Large Language Models (LLMs), a key technology that drives AI development, the company says:

“Because copyright today covers virtually every sort of human expression–including blog posts, photographs, forum posts, scraps of software code, and government documents–it would be impossible to train today’s leading AI models without using copyrighted materials (emphasis added). Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

OpenAI claims that it respects content creators and owners and looks forward to continuing to work with them, citing, among other things, the licensing agreement for content it has signed with the Associated Press. But failure to reach a licensing deal with the New York Times is really the crux of the lawsuit that the media giant has brought against OpenAI and its key investor Microsoft. If reports are true that OpenAI’s licensing deals top out at $5 million annually, it is not surprising that licensing negotiations between the Times and OpenAI broke down over such lowball offerings.

As for the CDCE submission to ISED, it recommends that the government refrain from creating any new exceptions for text and data mining (TDM) since this would interfere with the ability of users and rightsholders to set the boundaries of the emerging market in licensing. No copyright exemption for AI is what the British government has just confirmed, after playing footsie with the concept for over a year. Apart from OpenAI’s licensing deals with the Associated Press and the German multimedia giant Axel Springer, the CDCE paper notes a range of other recent examples of content owners offering access to their product through licensing arrangements, including Getty Images, Universal Music Group and educational and scientific publishers like Elsevier. The paper also urges the government to avoid interfering in the market when it comes to setting appropriate compensation, leaving it to market players or, where the players can’t reach agreement, to the quasi-judicial Copyright Board.

In my view, licensing is going to be the solution that will eventually level the playing field, but to get there it will require that major content players lock out the AI web-crawlers while pursuing legal redress, as the NYT is doing. This will help to open the licensing path to smaller players and individual creators who don’t have the resources available to employ either technical or legal remedies. (The issue of what has already been ingested without authorization still needs to be settled). As for the tech industry’s suggestion that creators can opt out of content ingestion if they wish, CDCE rightly points out that this is standing the world on its head and would be contrary to longstanding copyright practice. Not only is it impractical in a world where what goes into an AI model is a black box (thus the imperative for transparency), but it is like saying a homeowner has to request not to be burgled, or else can expect to become a target.

On the question of whether AI-generated works should be granted copyright protection, CDCE points out the double standard of proposing a TDM exception to copyright for inputs while claiming copyright protection for AI-generated outputs. The need for human creativity is a line that has been firmly held by the US Copyright Office, which has pushed back on various attempts to register AI-generated (as opposed to AI-assisted) works. Canada has not been quite so clear-cut in its position, owing to the way in which copyright is registered (almost by default, without examination) in Canada, as I pointed out in this blog post (A Tale of Two Copyrights). While AI-generated works have received copyright protection in Canada (Canadian Copyright Registration for my 100 Percent AI-Generated Work), this is more by oversight than design, given the way the Canadian copyright registration system works.

Thirdly, we turn to transparency, a sine qua non if licensing solutions are to be implemented. If authors don’t know whether their works are being used to train AI algorithms, or can’t easily prove it, licensing will fall flat. CDCE calls for publication of all content ingested into training models, disclosure of any outputs that contain AI-generated content, and design of AI models to prevent generation of illegal or infringing content. This is similar to requirements already under consideration in the EU.

CDCE also makes the important point that it is not just copyright legislation that defends individual and collective rights against the incursions of AI and big AI platforms. While the Copyright Act offers some protection to creators, privacy legislation is important for all citizens. As the UK Information Commissioner has pointed out in a recent report, the legal basis for web-scraping is dependent on (a) not breaching any laws, such as intellectual property or contract laws and (b) conformity with UK privacy laws (the GDPR, or General Data Protection Regulation), where the privacy rights of the individual may override the interests of AI developers, even if data scraping meets other legitimate interest tests.

Finally, there is the question of the moral rights of creators, which can be threatened by misapplication of AI, whether through infringement of a performer’s personality or publicity rights, distortion of their performance or creative output, misuse of their works for commercial or political purposes, or any of the other reasons why copyright gives the creator the right to authorize use of their work.

Quite apart from the question of AI, there are of course other outstanding copyright questions that need to be resolved urgently, including the longstanding issue of the ill-conceived education “fair dealing” exception that has undermined, if not permanently damaged, the educational publishing industry in Canada. This exception needs to be narrowed to allow users continued unlicensed access to copyrighted materials under fair dealing guidelines for study, research and educational purposes, but to limit institutional use only to situations where a work is not commercially available under a license from a rightsholder or collective society. While this issue requires looking back and fixing something that is already broken, policy making with respect to AI and copyright needs to anticipate the future and “do no harm”, while requiring AI developers to open up their black boxes and respect existing rights. This should be achieved by maintaining and protecting the rights of creators in ways that will facilitate market-based licensing solutions for use of copyrighted content by AI developers, while ensuring that creative output remains the domain of human beings, and not machines.

© Hugh Stephens, 2024.

When Giants Wrestle, the Earth Moves (NYT v OpenAI/Microsoft)

Image: www.shutterstock.com

There is no better way to start the New Year, 2024, than with a commentary on Artificial Intelligence (AI) and copyright. It was the big emerging issue in 2023 and is going to be even bigger in 2024. The unlicensed and unauthorized reproduction of copyright-protected material to train AI “machines”, in the process often producing content that directly competes in the market with the original material, is the Achilles heel of AI development. To date, no one knows if it is legal to do so, in the US or elsewhere, as the issue is still before the courts. The cases brought to date by artists, writers and image content purveyors like Getty Images have not always been the strongest or best thought out. In one instance, the plaintiffs had not even registered the copyright in some of the works for which they were claiming infringement, a fatal flaw in the US, where registration is a sine qua non for bringing an infringement case. That may have been the most egregious example of a rookie error, but in general the artists’ and writers’ cases have not gone too well so far, although the process continues. Some cases are on stronger grounds than others. Here is a good summary. The Getty Images case will be an interesting one to watch. And now the New York Times has weighed in with a billion-dollar suit against OpenAI and Microsoft. The big guys are now at the table and the sleeves are rolled up. The giants are wrestling.

What is at issue could be nothing less than the survival of the news media and the ability of individual creators to protect and monetize their work. It could also open a pathway to legitimacy for the burgeoning AI industry. The ultimate solution is surely not to put a halt to AI development, nor to put content creators out of business. It is to find a modus vivendi between the needs of AI developers to ingest content in order to train algorithms that will “create” (sort of) content–assembled from vast swathes of input–and the rights of content creators. While training sets are generally very large, some of the input can be very creator-specific and the output very creator-competitive. This is where the New York Times comes in.

The Times, like any enterprise, needs to be paid for the content it creates in order to stay in business and create yet more content. If its expensively acquired “product”, whether news, lifestyle, cooking, book reviews or any of the other content that Times’ readers crave and are willing to pay for, can be obtained for free through an AI algorithm (“What is the most popular brunch recipe in the NYT using eggs, bacon and spinach”, or “What does Thomas Friedman think of…..”), this creates a huge disincentive to go to the source and undermines journalism’s business model, already under severe stress and threat.

The Times is one of the few journals that has managed to thrive, relatively speaking, in the new digital age at a time when many of its competitors are dying on the vine. According to Press Gazette, the New York Times is the leading paywalled news publisher, with 9.4 million subscribers. (Wall Street Journal and Washington Post are numbers two and three respectively). You need to pay to read the Times, and why not? But paying for access does not give you the right to copy the content, especially for commercial purposes. (The Times offers various licensing agreements for reproduction of its content, with cost dependent on use). Technically, all it takes is one subscription from OpenAI and the content of the Times is laid bare to the reproduction machines, the “large language models”, or LLMs, used by the AI developers. The Times has now thrown down the gauntlet. Its legal complaint, 69 pages long, makes compelling reading. If there ever was a “smoking gun” putting the spotlight directly on the holus-bolus copying and ingestion of copyright protected proprietary content in order to produce an unfair directly-competing commercial product that harms the original source, this is it. It’s a far cry from earlier copyright infringement cases brought by some artists and writers.

While you can read the complaint yourself if you are interested (recommended reading), let me tease out a few of the highlights. After setting out the well-proven case for the excellence of its journalism, the Times’ complaint notes that while the defendants engaged in widespread copying from many sources, they gave Times’ content particular emphasis when building their LLMs, thus revealing a preference that recognized the value of that content. The result was a free ride on the journalism produced at great expense by the Times, using Times’ content to build “substitutive products” without permission or payment.

Not only does ChatGPT at times regurgitate the Times’ content verbatim, or closely summarize it while mimicking its style, but at other times it wrongly attributes false information to the Times. This is referred to in AI circles as “hallucination”, something the complaint labels misinformation that undermines the credibility of the Times’ reporting and its reputation. Hallucination is a particularly dangerous element of AI-produced content. Rather than admitting it doesn’t know the answer, the AI algorithm simply makes one up, complete with false references and attributions, all of which make it very difficult for the average reader to separate fact from fiction. This misinformation is the basis of the Times’ complaint for trademark dilution, which accompanies various other copyright-related complaints of infringement. Concrete examples of such misinformation are provided in the complaint.

So too is ample evidence of users exploiting ChatGPT to pierce the Times’ paywall, by asking for the completion of stories that have been blocked for non-subscribers. There are concrete examples of carefully researched restaurant and product reviews that have been replicated virtually verbatim. Not only is the Times’ subscription model undermined, but the value it derives from reader-linked product referrals from its own platform bleeds to Bing when the product is accessed through Microsoft Search enabled by ChatGPT. Examples are given of full news articles based on extensive Times’ investigative reporting being reproduced by ChatGPT, with only the slightest variations. These are not composite news reports of what is happening in Gaza, for example, but a word-for-word lifting of a Times’ analysis of what Hamas knew about Israeli military intelligence. The Times’ complaint makes for chilling reading. AI’s hand has been caught firmly in the cookie jar.

What does the Times want out of all of this? The complaint does not specify a dollar amount, while noting the billions in increased valuation that have accrued to OpenAI and Microsoft as a result of ChatGPT. However, it asks for statutory and compensatory damages, “restitution, disgorgement, and any other relief that may be permitted by law or equity” as well as destruction of all LLM models incorporating New York Times’ content, plus, of course, costs. If the Times gets its way, this will be a huge setback for AI development as well as for OpenAI and Microsoft, but of course it may not come to that. The complaint notes that the Times had tried to reach a licensing deal with the defendants. OpenAI cried foul, expressing “disappointment” and noting that it had been having “productive” and “constructive” discussions with the Times over licensing content. However, to me this is a bit like stealing the cookies, getting caught red-handed and offering to negotiate to pay for them, then crying foul when your offer is rebuffed. The Times has just massively upped the ante, making the potential licensing fees much more valuable.

The irony is that the use of NYT material by OpenAI or indeed other platforms like Google or Facebook potentially brings some advantage and drives some business to the Times, while obviously also providing commercial benefits to the AI program, search engines or social media platforms. The real question will be how that proprietary content is used, and how much is paid to use it. A similar issue is being played out in another context, most recently in Canada with Bill C-18 where news media content providers wanted the big platforms (Google and Meta/Facebook) that derive benefit from using or indexing that content to pay for accessing it. The result in Canada was both a standoff and a compromise. Facebook blocked Canadian news content rather than pay for it, while Google agreed to create a fund for access by the news media in return for being exempted from the Canadian legislation.

The NYT-OpenAI/Microsoft lawsuit is a different iteration of the same principle. Businesses that gain commercial advantage from using proprietary content of others should contribute to the creation of that content, either through licensing or some other means such as a media fund. The most logical outcome of the Times’ lawsuit is almost certainly going to be a licensing agreement. Given the seemingly unstoppable wave of AI development, meaningful licensing agreements would seem to be the best way to ensure fairness and balance of interests going forward.  

A Goliath like the New York Times is in a much better position to make this happen than a disparate group of writers and artists. Indeed, there are logistical challenges in being able to license the works of tens of thousands of content creators. In an earlier blog post, I postulated that perhaps copyright collectives might find a role for themselves in this area in future. In my view, ultimately the only logical solution to the conundrum of respecting rights-holders while facilitating the development of AI is to find common ground through fair and balanced licensing solutions. The wrestling giants of the NYT and Microsoft may help show the way.

© Hugh Stephens 2024. All Rights Reserved.

AI’s Copyright Challenges: Searching for an International Consensus

Image: Shutterstock

This has been a busy couple of weeks for national and international declarations on Artificial Intelligence (AI). First, the G7 issued its International Code of Conduct for Advanced AI Systems on October 30. The same day, US President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, followed a couple of days later by the Bletchley Declaration at the conclusion of the “AI Summit” hosted by UK Prime Minister Rishi Sunak at Bletchley Park, Buckinghamshire. Meanwhile, the EU’s AI Act is being touted by its sponsor as a potential model for AI legislation in other parts of the world (although its enactment is currently bogged down in the trilogue process between the Commission, the EU Council and the European Parliament). Notable was the fact that the US Executive Order, a wide-ranging framework document covering many aspects of the AI issue, effectively “scooped” the Brits by a day or so, allowing Vice President Kamala Harris to highlight steps the US had just announced when speaking to the press at Bletchley.

The declarations all addressed many of the concerns surrounding AI, ranging from safety and security, fraud and cybersecurity, to privacy, equity and civil rights, to protecting consumers, supporting workers and promoting innovation. A key issue only lightly touched on in these declarations, however, was that of AI’s intersection with copyright. This was a missed opportunity to come to grips with a major concern regarding how AI will be able to co-exist with copyright law. (The EU’s draft AI Act includes a transparency requirement to “document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law“, Article 28(b) 4(c).)

AI faces two significant challenges when it comes to copyright protection. First, with respect to the inputs that AI developers use to populate their models to produce generative AI, there is the unresolved question as to whether the free use of copyrighted content violates copyright law by making unauthorized reproductions. There are currently a number of lawsuits underway in the US examining this fundamental question. Many creator groups, such as the News Media Alliance in the United States, argue that “the pervasive copying of expressive works to train and fuel generative artificial intelligence systems is copyright infringement and not a fair use”.

Second, with respect to outputs, a work generated by AI faces two challenges in terms of obtaining the benefits of copyright protection. If its inputs are infringing, that clearly casts doubt on the legality of the derivative outputs. In addition, there is the problem posed by the current position of the US Copyright Office (and most other copyright authorities) that, to be copyright-protected, a work must be an original human creation. After the infamous Monkey Selfie case, the USCO issued an interpretive bulletin reiterating the need for human authorship and, to date, it has hewed to this line when examining applications for copyright registration from authors claiming works produced by AI.

The G7 Declaration was broad, covering a wide range of issues related to AI. It included a reference to the copyright issue under Point 11, “Implement appropriate data input measures and protections for personal data and intellectual property”, specifically stating that, “Organizations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content.” This is hardly prescriptive language, but it is a beginning. I understand that the creative community had to fight hard to get this wording included, but it is at least recognition of the issue.

With respect to the US Administration’s Executive Order, the issue of copyright was also acknowledged, but in a somewhat backhanded way. Section 5.2 (Promoting Innovation) addresses copyright as part of clarifying issues “related to AI and inventorship of patentable subject matter”. Paragraph (c)(iii) declares that the Under Secretary of Commerce for Intellectual Property and Director of the US Patent and Trademark Office shall:

“within 270 days of the date of this order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes later, consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.”

This is not exactly a ringing endorsement of the need to respect the copyright of those who, willingly or not, provide the raw material for the voracious AI machines that are busy scooping up creators’ content, but it is nonetheless an acknowledgment that there’s an issue that needs addressing.

The US Copyright Office (USCO) launched its study on Artificial Intelligence and Copyright on August 30 of this year “to help assess whether legislative or regulatory steps in this area are warranted”. By the end of October, the USCO had already received more than 10,000 submissions. The comments range from statements by AI developers as to why they shouldn’t be required to pay for copyrighted content used as inputs in developing their models (while of course claiming they should enjoy the benefits of copyright protection for their AI-generated outputs), to submissions by creator organizations arguing, among other things, that the ingestion of copyrighted material by AI systems is not categorically fair use and that AI companies should license the works they ingest. Licensing their content to AI companies as an additional revenue stream is precisely what major media companies are currently pursuing.

If the US, currently and for the foreseeable future the leading country in the development of AI, is thrashing around trying to address this question, one can imagine the process taking place elsewhere. Will the need to set standards inevitably lead to some form of international consensus on the regulation of AI, including the role of copyrighted content? I think it will be essential. Countries that are too lax in protecting their creative sectors will see their copyright-protected cultural industries suffer negative economic consequences; countries that are overly protective of content worry that investment in AI innovation will flow to countries with lower copyright standards, setting off a race to the bottom for creators.

The UK government has already felt the pinch of this dilemma. In a misguided attempt to gain a head start in the AI development race, about a year and a half ago the British government unveiled a proposal sponsored by the UK Intellectual Property Office (of all entities!) to create an unlimited text and data mining (TDM) exception to copyright, at the same time stripping rights-holders of their ability to license their content for TDM purposes, or to contract or opt out. In the words of the discussion paper accompanying the draft legislation, in order to reduce the time needed to obtain permission from rightsholders and to eliminate the need to pay license fees:

“The Government has decided to introduce a new copyright and database right exception which allows TDM for any purpose … Rights holders will no longer be able to charge for UK licences for TDM and will not be able to contract or opt-out of the exception.”

This outrageous attempted expropriation of intellectual property rights aroused a storm of protest from the UK’s vibrant cultural sector, a backlash that found resonance in Parliament. As a result, the British government backed off and withdrew the proposed legislation. However, one wonders if the stake has truly been driven through the heart of this hi-tech gambit or whether, like Dracula, this misguided policy will rise again (UK Parliamentary Committee Shoots Down Copyright Exemption for AI Developers–But is it Really Dead?). Certainly, British publishers are not convinced the content grab is over. According to the Guardian, they have just issued a statement urging the UK government “to help end the unfettered, opaque development of artificial intelligence tools that use copyright-protected works with impunity.”

Canada has just launched a public consultation on AI and Copyright (“Copyright in the Age of Generative Artificial Intelligence”), and others will be doing the same. In Australia, Google, responding to a review of copyright enforcement, urged the government to relax copyright laws to allow artificial intelligence to mine websites for information across the internet (even though this wasn’t the topic of the enquiry). Meanwhile, the Attorney-General’s Department has been conducting several roundtables to explore the issue, the most recent being at the end of August. In that roundtable, representatives of the Australian creative community called for greater transparency around how copyright material is being used by AI developers during both the input (training) and output processes.

And so, the search for the right formula goes on. It will not be easy to find the elusive international consensus, especially since at the moment (with the exception of China) this is an issue on the agenda only of the so-called Global North.

How the heavy-hitters will deal with the issue of AI, including its intellectual property dimensions, remains to be seen. What emerges could be something as relatively powerless as OECD guidelines, or regulation could go a lot further, including the establishment of some kind of international agency with the “authority” to regulate in the area of AI, as suggested by Elon Musk and others. However, as we have seen with every international organization created to date, whether it be the UN, the World Trade Organization, the International Atomic Energy Agency, or any of the myriad other supra-national structures created in recent years, the authority granted to them is only as good as the commitment of their signatory states. It makes sense to harmonize and set broad international standards for the way in which AI is created and used, but it will be a long road to get there.

The challenge of how copyright can intersect with AI–to the mutual benefit of both–has still to be worked out. The courts are playing a role, as are technology, evolving business models, and legislation. Society needs to find the sweet spot where both human creation and technological advancement in the form of AI can co-exist for the benefit of society at large. Despite recent pronouncements, the search continues.

© Hugh Stephens, 2023. All Rights Reserved.

This post has been updated to include reference to the ongoing roundtable process underway in Australia under the aegis of the Attorney-General’s Department to explore, inter alia, questions of AI and copyright.