If AI Tramples Copyright During its Training and Development, Should AI’s Output Benefit from Copyright Protection? Part Two: Jason Allen

Image: Théâtre D’Opéra Spatial, Jason B. Allen (not protected by copyright)

Last week I wrote about Stephen Thaler’s quixotic yet determined quest to obtain copyright registration in the US for his AI-generated artwork, “A Recent Entrance to Paradise”, created (he claims) exclusively by his AI “machine”, the so-called Creativity Machine. So far, despite repeated efforts, he has drilled a dry hole. An alternative approach to claiming copyright for an AI-generated work is to assert that the AI used to produce it was simply a technological assist: the essence of the work was produced through human creativity, using AI only as a tool, and therefore the work should be eligible for copyright protection, or so the argument goes. Unlike the example of Kristina Kashtanova, discussed in last week’s blog, under this theory the entire work is protectable because it was human created, with AI playing only a facilitating or assistive role. This line of argument has most recently been pushed by the creator of the work “Théâtre D’Opéra Spatial”, Jason B. Allen.

Allen made headlines a couple of years back (September 2022) when he entered Théâtre into the Colorado State Fair’s annual art competition in the category of “digital art/digitally manipulated photography.” He labelled the piece as having been created by him, “via Midjourney”, the popular generative AI art algorithm that had recently been released. He won first prize, incurring the opprobrium of many artists who accused him of crashing a contest for human creators. Writing just a month later, in October of 2022, I posted my own AI-produced artwork, based on the style of Monet (whose works are in the public domain), in this blog post (AI and Computer-Generated Art: Its Impact on Artists and Copyright). My effort was substantially less artistic than Allen’s but was an original work of sorts, created with the help of AI. I used the program DALL-E 2, which is similar to Midjourney. Both were freely available tools that any rank amateur “artist” (like me) could use.

Generative AI as a source of art burst into the public’s consciousness in 2022 because of the public release of these programs, but AI-generated art had been around for a few years before that, although used exclusively by art specialists. The New York Times reported on a sale at Christie’s Auction House four years earlier, in October 2018 (not exactly eons ago, but generations in internet/AI time). A portrait with blurred and distorted lines produced by an AI algorithm sold in New York for $432,500 (with fees). Christie’s, in inimitable auctioneering style, billed it as “the first portrait generated by an algorithm to come up for auction”, according to the Times. Now AI-generated art is a dime a dozen. In fact, often it is hard to tell what is AI generated and what is not.

If the question of copyright protection for AI-generated content is a big issue, an even bigger one is currently being played out in the courts; content owners, ranging from the New York Times to Getty Images to music labels to authors, are suing various AI development companies for unlicensed use of their content to train AI programs. The Copyright Alliance has a good summary of the various lawsuits in play here. The ultimate outcome is undecided, but if the courts find that the wholesale unlicensed and unauthorized ingestion of copyrighted content to train AI algorithms is not fair use (in the US) or does not fall within specified text and data mining exceptions in other jurisdictions, then the table will be set for serious negotiation between rights holders and AI developers. Some of the parameters of this negotiation are already pretty obvious:

  • a transparent inventory of what copyrighted works were accessed for training;
  • the ability of rights holders to opt out or opt in;
  • various options for licensing content for training purposes.

If these conditions governing inputs were met, rights holders might be somewhat more sympathetic to arguments for copyrighting the output of generative AI programs. As it is, AI developers, and the users of AI programs, want to have their cake and eat it too. Jason B. Allen of “Théâtre D’Opéra Spatial” is Exhibit No. 1.

Allen is currently appealing in court the US Copyright Office’s ruling that his work cannot be registered under copyright. He claims that because of all the publicity about his work, and the USCO’s subsequent decision to deny copyright registration on the grounds that it was an AI-generated rather than human creation, the work has lost value, impairing his ability to charge industry-standard licensing fees. Moreover, he claims that without copyright protection, he has no ability to stop others from using his work without authorization. (Like me, posting Théâtre on this blog post). Apparently, people are selling copies of the work on Etsy. One has some sympathy for his position, as it is one faced by many artists whose copyrighted works are also being ripped off on the internet.

Allen argues he had substantial creative input into the production of the work, using no fewer than 624 prompts to shape it to his mental specifications. Of course, we have no idea what those specifications were. What Midjourney produced may have been an accurate reflection of what Allen had in mind and intentionally created, or it may have taken him on a journey where he eventually settled on the output offered. One thing is likely, if not certain: were he to enter those exact same prompts into Midjourney today, the outcome would not be identical to the current “Théâtre D’Opéra Spatial”. This raises the question of who, exactly, is guiding the creative process: the artist making the prompts, or the algorithm responding to them.

Because of the way AI works, there is a large degree of randomness in the results, requiring ever more precise prompts to narrow the range of possibilities and guide the algorithm to the desired destination. But it is almost impossible to recreate precisely the route to an outcome. This suggests to me that, in the end, it is the algorithm that is in control, not the human issuing the prompts. (Although not everyone agrees with this thesis). To my mind, this is what distinguishes AI-generated art from photography, where the photographer, while using a mechanical assist, is nevertheless in full control at all times and has the ability to adjust for extraneous inputs such as light and shadow, rather than being controlled by them.

Allen’s appeal of the USCO’s rejection of his copyright claim takes place against a backdrop where Midjourney, the AI program he used, is itself being sued by a group of artists for appropriating their work without permission in order to train Midjourney’s art generation algorithms. Does anyone see any irony in this? However, the fact that Midjourney takes the works of others without permission for training and development purposes is not really Allen’s fault. He and other users of the program could perhaps be considered victims almost as much as the artists whose works have been appropriated. Nevertheless, if Allen is ever successful in getting Théâtre registered, this will be not just to his benefit, but also to the benefit of Midjourney and all the other AI developers who are in a similar position. On the other hand, if the output of their programs cannot be copyright protected, it diminishes the value of the AI product. So, perversely, the more Allen pursues registration for Théâtre, the more he undercuts those who make a living from producing art.

I see one possible scenario that could help resolve the issue of AI outputs being unprotectable. If the key elements of respect for copyrighted work (transparent inventory, opt in/out, licensing) were to be adopted by AI developers, then perhaps rights-holders would be more amenable to accepting at least some degree of copyright protection for AI created or assisted outputs. But right now, the AI industry wants it both ways; total freedom to appropriate copyrighted works for training and development purposes while claiming the same copyright protection they have just trampled for AI generated outputs. Jason Allen and other digital artists who use AI to produce art or other works are caught in the middle.

At the moment there is no clear solution. The most likely outcome, after all the legal dust has settled, is probably going to be some ability to copyright works produced with AI, dependent on the extent and degree of human intervention in a given work (which could possibly include carefully tailored prompts), balanced by a commitment by the AI industry to recognize the property rights of those holding copyright over the content it uses to create the AI program in the first place. This will necessarily involve an appropriate sharing of the added value produced by AI, in the form of licensing fees. It will take a few more years, a few more lawsuits, a few court decisions, and a few government interventions in the form of legislation, but I can see no other way forward. In the interim, neither Stephen Thaler nor Jason Allen is likely to get what they want.

© Hugh Stephens 2024. All Rights Reserved.

If AI Tramples Copyright During its Training and Development, Should AI’s Output Benefit from Copyright Protection? Part One: Stephen Thaler

Image: “A Recent Entrance to Paradise”, Stephen Thaler (not protected by copyright)

One of the ongoing debates about works made with generative AI is whether they qualify for copyright protection. Should they? Let’s consider the essence of copyright. What is its raison d’être? According to the classical European definition, it is to respect the property rights of the author (droit d’auteur), sometimes described in the simple terms of the Eighth Commandment (“Thou shalt not steal”). According to the more utilitarian Anglo-Saxon rationale for copyright, it is to benefit society by rewarding and incentivizing authors, thus stimulating further production of works for the greater good. In either case, it gives one pause to wonder how works created by AI fit into either school of thought. Is there an inherent property right in content produced by an algorithm? How does copyright protection incentivize an algorithm to produce more “useful arts”?

In my view, the only way one can square this circle is by attributing human creation, or at least a degree of human creation, to AI generated works. But this opens Pandora’s box; if there is to be human attribution, to whom in the chain of creation does the credit fall? How much human creativity is required? It also raises the spectre of hypocrisy, as the AI industry hijacks the creative work of others, without recognition or recompense, yet has the gall to claim that AI outputs are unique and worthy of protection. I have written about these issues before (here, here, and here), but a couple of recent cases in the US have brought these fundamental issues bubbling back to the surface.

When it comes to trying to prove that AI and copyright protection go together, there are a couple of different approaches. One, most notably espoused by an AI technologist in the US named Stephen Thaler, is to claim that a given work was produced exclusively by AI but should nonetheless be protected. In Thaler’s case, the work for which he is seeking copyright registration was created by a particular AI “machine” (or algorithm), specifically the one he “invented”, the so-called “Creativity Machine”. Thaler claims his machine should hold the copyright but behind the machine, of course, stands Thaler. This is not dissimilar to existing British copyright law where, under Section 178 of the Copyright, Designs and Patents Act, 1988, works “generated by computer in circumstances such that there is no human author of the work” are nevertheless accorded copyright protection for fifty years from date of creation, with the copyright being held by “the person by whom the arrangements necessary for the creation of the work are undertaken”, even if there was no creative act undertaken by that person.

As I noted in an earlier blog post (The Humanity of Copyright), Thaler began his (so far) unsuccessful pursuit of US copyright registration for his professed 100% AI-generated artwork, “A Recent Entrance to Paradise”, back in 2018. Despite several reversals, both in the application process at the USCO and its Review Board, and in the District of Columbia courts, Thaler persists in his quixotic journey. He has unsuccessfully argued various precedents for non-human copyright ownership, such as the “work for hire” doctrine and corporate copyright, although it is worth noting that humans stand behind both. He is now apparently pursuing the common law theory of “fruit of the tree” in his attempt to get the USCO to register his work. To my non-legal mind, this is the ultimate stretch.

What Thaler could do is to claim that at least part of the work is the result of his personal creative efforts. That was the USCO outcome for the graphic novel Zarya of the Dawn produced by writer Kristina Kashtanova (who identifies as “they”). The novel contained both generative artwork and human story and design elements. After initially registering Kashtanova’s work, the Copyright Office cancelled the registration once they (Kashtanova, that is) disclosed that it was AI produced. The Office subsequently reconsidered and granted copyright protection to the parts of the work Kashtanova had created, namely the text, and the selection and arrangement of the work’s written and visual elements. That, however, is a step too far for Thaler, who continues to push for recognition by the Copyright Office that a work produced exclusively by his “Creativity Machine” can be protected by copyright. That seems very unlikely to happen.

While Thaler doggedly pursues copyright registration for “A Recent Entrance to Paradise” (featured on this blog post—after all, it is not copyright protected), others who have created art using AI are following a different track. One of these is Jason B. Allen, whose award-winning digital art creation “Théâtre D’Opéra Spatial” has also been denied copyright registration. Allen’s approach is the opposite of that taken by Thaler. In contrast to Thaler’s insistence that the work is a creation of AI (his AI “machine”), Allen insists that he is the source of the creative inspiration behind the work, notwithstanding that it was created by an AI algorithm, Midjourney. Allen’s pursuit of copyright registration for “Théâtre” will be the subject of my blog post next week.

© Hugh Stephens, 2024. All Rights Reserved.

Copyright in Cottage Country

Photo (c) author, 2023

For anyone who may have noticed (hopefully you did), I have not posted a blog for a couple of weeks. I am invoking that blanket summertime excuse, “I’ve been up at the cottage”. But I have not completely divorced myself from thinking about copyright issues, since they seem to pop up everywhere, even in cottage country.

For non “Central Canadians”, the term “cottage” may be a bit of a mystery or even a misnomer. If you are thinking of Anne Hathaway’s Tudor cottage covered with climbing roses, you are way off base. In fact, the term doesn’t seem to be used much outside of Ontario, where “cottage country” means any place on any lake north of Toronto. (There are a lot of them; over 250,000 lakes in Ontario). In the west, they are referred to as “cabins”, in Quebec as “chalets”—and in Russia, “dachas”, or so I am told. They can be rustic, canoe-access-only abodes with outdoor plumbing, or they can be multi-million-dollar homes on places like Lakes Rousseau or Joseph in the Muskokas, with every kind of cottage toy (ski boat, sea-doos, inflatable floating bouncy castle, etc.) imaginable. For those fortunate enough to own a cottage—and it is not a small number—many have been in the family for several generations (and will now likely attract a higher capital gains tax when they are passed on to the children, given recent budget changes. So much for the new rules affecting only 0.13 percent of Canadians). But cottage life is also embraced by new Canadians, and some cottages are even rented out (God forbid) on platforms like AirBNB.

In my case, our cottage is located in the Kawartha Highlands, Canadian Shield country between the towns of Bancroft and Haliburton. We can drive there on washboard dirt roads. There is hydro (most of the time) but very limited cell coverage. If you walk down the road and stand on a particular rock in the evening, it usually works. We used to have a landline and dial-up internet, but it was so slow we gave up, and then the phone company cancelled the “cottage line suspension service” we used to use in the winter. Because, you see, like many cottages, we close up for “the winter” (which generally means between Canadian Thanksgiving in early October and Victoria Day weekend in late May). The power is shut off, the pipes drained, the dock lifted, and everything is closed up to await the minus 30C temperatures that sometimes arrive in Jan-Feb.

So what does this have to do with ©? Well, you can’t escape it, even up here at the lake. Whether it is the $4.99 DVDs we buy at the local supermarket for those inevitable rainy days or the ubiquitous copyrighted board games (Scrabble, Clue, Cranium, you name it) that get trotted out, the © symbol is everywhere. Cottage country must be the last stand of the DVD, and a lot of low-grade Hollywood movies and series that you have never heard of seem to arrive on the shelves of the local Foodland for bored cottagers. But there is no cable and no streaming, so if you want to watch something in the evening, get out the old DVDs. At our place, Bugs Bunny and Tom and Jerry cartoons, dating back to the 1950s and beyond, are still popular as a new generation discovers them, but beware of piracy. One young visitor, having seen the scary FBI warning at the beginning of a Bugs DVD, asked what piracy was because, as she said, “I don’t want to go to jail for 5 years or pay a fine of $250,000.”

And then there are the card games, many of them, like cribbage, invented a couple of centuries ago. But the Bicycle “Official Rules of Card Games”, first copyrighted in 1887 (ours is the relatively recent 2006 edition), is indispensable. While the rules of a game (i.e. the mechanics) cannot be protected by copyright–although artwork and design unique to a specific game could be–Bicycle’s edition of the rules is copyrighted because of its editorial content, organization, etc.

The local paper, the Bancroft Times (founded 1894), has its own copyrighted copy. You won’t find Canadian Press or Associated Press articles here. You will find a blow-by-blow description of the local Council meeting where Council wrestled with the thorny problem of the arena’s deficit and whether to impose user fees on residents of outlying municipalities that were refusing to pay their share of operating expenses. This seems only fair, except the problem arises from the disincentive caused by user fees. If it stops the kids from outside town from using the arena, there won’t be enough players for the hockey league. Thorny local issues. This made me reflect on some of the big journalism issues of the day, such as the New York Times v OpenAI lawsuit (that I wrote about here) concerning OpenAI’s unlicensed use of NYT content to train its Artificial Intelligence (AI) algorithm. It also made me wonder whether the Times’ (the Bancroft Times, that is) coverage of Bancroft Town Council’s meeting would be scooped up by OpenAI’s AI machine. We hear that AI has an insatiable appetite for data, in fact that within a year or two there may be such a shortage of data for AI training that AI may have to produce its own synthetic data. If there is a data drought, no doubt content in the Bancroft Times will be as important to OpenAI as the content it has purloined from the New York Times.

Another manifestation of AI is the use of copyrighted artwork and photos to produce new AI-generated content. A couple of years ago, while gazing at the lake from the dock, I read about DALL-E 2, one of the earlier AI programs that could produce images on demand. I created one using the scenery I was enjoying, titled “Autumn foliage, with Muskoka Chairs in the style of Monet”, to illustrate my blog on “AI and Computer-Generated Art: Its Impact on Artists and Copyright”.

Whether it is broadcasts from “The Moose” (Moose FM Radio, officially known as CHMS-FM), content from the local newspaper, the “Think Turtle Conservation Initiative” that sells signs, decals, and fridge magnets to raise awareness of turtles crossing the road, or information from the local lake association or Fish and Game Club, the integrity of content is protected everywhere by copyright—even at the cottage!

© Hugh Stephens, 2024. All Rights Reserved.

It Took Glue on Pizza to Spotlight Google’s AI Problem

Image: Shutterstock (with AI assist)

Google, the “indispensable” search engine relied on by millions for accurate and reliable search, has done it again, stepping smack into the pile of steaming excrement waiting for it in the middle of the road. Its most recent ill-starred foray into AI-generated search has yielded some hilarious results, lighting up the blogosphere and making Google the butt of many jokes. Google flubbed the public launch of its first AI-enabled service, Bard, back in early 2023, when the AI-driven search function produced the wrong answer to a simple question about the James Webb Space Telescope, overnight wiping $100 billion off Google’s valuation. Now its new Gemini “AI Overview” service has scored another own goal with its “hallucinatory” responses to questions such as how to ensure cheese will stay on pizza (add glue) or how many rocks a human should eat each day. (Only one, in case you were wondering). It also informed users that Barack Obama was the first Muslim President of the United States.

When it comes to AI, “hallucinations” refer to incorrect or misleading results arising from a lack of training data, biased or selective training data, or incorrect assumptions made by the model. Hallucination leading to trademark dilution was one of the accusations levelled against OpenAI and Microsoft by the New York Times in its landmark copyright infringement case that is still working its way through the courts. In this case, the AI algorithm incorrectly attributed false information to the Times, thus undermining its journalistic credibility and diluting its trademark, or so the argument goes.

Apparently, the source of the pizza glue misinformation was an old tongue-in-cheek post on Reddit. I guess an AI algorithm has no sense of humour and can’t tell sarcasm from reality. It also gives credibility to conspiracy theories and blatantly false information, such as the Barack Obama example. Normally a search on any subject turns up a variety of sources on Google, some clearly more authoritative than others. Searchers can weigh a Wikipedia entry against a Reddit post against information from a government website or reputable academic institution. Even a plain old tendentious website put up by an advocacy organization can be probed and the bona fides of the source checked out. That is becoming more difficult, or at least less obvious, with the AI generated search summary provided by Google’s AI Overview.

If the search topic falls within AI Overview’s purview (and at the moment, not all do), viewers will see a summary of the information requested drawn from sources chosen by the algorithm. The algorithm decides how much information is drawn from any given site, and which sites are chosen. Users have the option of clicking through to access these and other sites that are displayed (below the annoying sponsored listings). However, many consumers, looking for a quick information fix, will not bother to do this and thus risk taking the AI summary as gospel. If you are being advised to mix glue into your pizza topping, you can probably figure out that something is haywire, but if the summary is only slightly wrong, or is on a subject that you are not familiar with, watch out. A good example was provided by the website Plagiarism Today. It asked Google five questions about copyright in the US. Its conclusion regarding the responses provided by AI Overview? Decidedly mixed. One A, one B, one C, one D and one resounding F.

The accuracy of the summary obviously depends on the sources of information chosen by the algorithm, and the emphasis it chooses to put on information from any given source. Unfortunately, in many instances it does not seem to prefer credible and authoritative sources, but instead goes for those that are popular. That is one of the basic problems of AI generally—quantity over quality, popularity over facts. (By the way, this account of how AI Overview works is based on reading US sources, since it is not yet available in Canada. That may be a good thing, since I have read various US posts explaining that Overview is effectively impossible to disable). Google intends to cram it down your throat whether you want it or not.

Of course, Google assures everyone that Overview is a “good thing” and the early kinks will be ironed out. Many websites are not happy with the new interface that will now exist between themselves and the consumer. They lose traffic when users simply read the AI summary and move on, not visiting the source website. Google used to boast that it had a symbiotic relationship with content providers because it facilitated, and even drove, eyeballs to the sites. No longer. It has appropriated–without permission–content from independent sites to feed AI Overview in the same way that the LLM (Large Language Model) AI developers have scooped up content, including copyrighted content, from rightsholders, without permission, licence or payment to enable their AI training. It is one thing to link to third party content, which requires a visit to the actual site to access the content; it is quite another to freely copy from it and mix it, sometimes inaccurately or inappropriately, with content from other websites that may not be reliable or acceptable sources of information.

Google clearly controls what goes into AI Overview and has said that it will apply more filters. If it can screen out sources of sarcasm and parody, it clearly has the capacity to install other filters that could differentiate trustworthy information from garbage. This might require Google to license the use of this curated information (Horrors! Google having to pay for the information of others that it so freely uses!). Licensing has already begun for content used for generative AI training. News Corp has just signed a licensing deal with OpenAI, as have the AP and the Financial Times. Licensing is at the heart of the dispute that OpenAI is facing with the New York Times.

Licensing presupposes knowledge of what inputs are being used, a requirement now enshrined in EU law, which obliges AI developers to maintain an inventory of works used for training purposes (transparency). This will allow rightsholders to opt out or negotiate a licensing solution unless the copying falls within the text and data mining exception (i.e. research by research organizations and cultural heritage institutions).

However, Silicon Valley has variously proclaimed that (a) it is impossible to track all the information ingested during AI training; (b) it would bankrupt the industry should they have to pay for content; (c) they need to use copyrighted content because there is not enough current public domain information available; (d) it is not feasible to filter out or identify specific works amongst the millions of datapoints ingested; (e) everything that it does is fair use anyway; or (f) all of the above. Google’s embarrassment, and its apparent ability to finetune AI Overview, demonstrate that it is clearly feasible to filter out certain works and types of content. It is the will to do so generally that is lacking. Meanwhile, the number of lawsuits brought by rightsholders against AI developers continues to multiply.

By making itself the object of social media ridicule, and then admitting it can address the problem, Google has actually done us all a favour by highlighting the “garbage in; garbage out” problem. Not all copyrighted material is responsible or accurate, but a good chunk of it is, such as professional journalism and academic journals. Access to that material is essential to provide credible results. And that material needs to be paid for, on terms set by the content owners.

The solution is not to stop the development of generative AI; for one thing, that won’t happen. It is to corral it, improve it and make it more trustworthy, if necessary with penalties if it is not. The penalties could be imposed by the market (i.e. Google search is not reliable so I will go elsewhere) or, in certain cases, by regulation. Licensing of accurate, credible information to drive search will inevitably distinguish the fake from the real and dubious from the trustworthy. This is what any credible search engine seeks. It is market gold.

Google, open your bulging wallet and start licensing content that will make us want to continue to try you first to get reliable information. Right now, through your clumsy rollout of AI supported search, you are rapidly losing that trust. It is also not acceptable to plagiarize someone else’s content, mix it with garbage from some other source, and serve it up on a platter to consumers on the pretext that this is the definitive answer. The result, as we have seen, is gluey pizza.

(c) Hugh Stephens, 2024. All rights reserved.

As Creator’s Rights Face New Challenges, Canada Needs to Keep Pace with International Developments

Image by Greg Altmann/Pixabay

This blog post appeared first in Open Canada, the journal of the Canadian International Council, on November 20, 2023.

In the past few weeks there has been a flurry of activity with respect to international regulation of Artificial Intelligence (AI), such as the Bletchley Declaration spearheaded by UK Prime Minister Rishi Sunak, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence signed by US President Biden, as well as the G7 AI Code of Conduct. Canada has been involved in some of these initiatives but will be affected whether or not it participates, including by aspects of the AI issue dealing with protection of intellectual property, specifically creator’s rights protected through copyright laws. Indeed, a huge issue is emerging over whether the indiscriminate scraping of copyrighted content by AI developers to train their algorithms is legal under existing copyright laws, and whether the content generated by AI can be legally protected under accepted copyright norms.

When it comes to copyright challenges today, Canada has just launched a public consultation on this issue (“Copyright in the Age of Generative Artificial Intelligence”) as part of the process of updating Canada’s copyright framework in the digital age. The question of the extent to which AI and copyright are compatible illustrates the interdependent nature of national copyright protection regimes, notwithstanding the fact that each nation interprets and applies copyright law in slightly different ways. The key point here is that countries that are too lax in protecting their creative sectors could see their copyright-protected cultural industries suffer negative economic impact, while countries that are overly protective could see investment in AI innovation flow to countries with lower copyright standards. Whatever international consensus emerges and whatever framework to regulate AI is developed, whether it be OECD guidelines or a more formal arrangement, Canada will need to be a party. Such is the nature of international policy these days with respect to digital matters, copyright and intellectual property generally.

Canada is already bound by a number of international commitments with respect to its application of copyright, and these will undoubtedly have an impact on future and long overdue revisions to Canada’s Copyright Act. For example, the current international framework for protection of copyrighted works (books, films, music, performances, television and streaming content, broadcasts and other forms of copyright protected works such as software, paintings, drawings, photographs, etc.) is embedded in the TRIPS Agreement (WTO Agreement on Trade-Related Aspects of Intellectual Property Rights) that was part of the establishment of the World Trade Organization in 1995. Canada was, of course, a party to TRIPS. With respect to copyright, TRIPS incorporates the terms of the Berne Convention, with the added benefit of a dispute-settlement mechanism, something lacking in Berne. However, the WTO’s dispute settlement process is currently suspended owing to the inability to appoint new members to the WTO’s Appellate Body because of US opposition. The Berne Convention of 1886 (along with its periodic updates over the years) remains the bedrock of international copyright cooperation. 

When it comes to copyright legislation, it is generally accepted that the first modern copyright law is the Statute of Anne, passed by the British Parliament in 1710. The stated purpose of the legislation was “the encouragement of learning”. For the first time it gave authors (or those to whom they assigned their rights), rather than printers, the exclusive right to print or reprint their books, for an initial period of fourteen years. A similar provision was included in the US Constitution, “to promote the progress of science and useful arts…”, and the first US copyright statute again provided an initial period of protection of fourteen years. 

During the 18th and most of the 19th century, copyright laws applied only to nationals of the country concerned or to works first published in that country. Thus, British copyright applied in Britain (and the British Empire) to British authors, but works by nationals of other countries could be freely reprinted in Britain unless first published there. The same applied in the US and other countries. Attempts were made to negotiate bilateral treaties under which reciprocal protection would be afforded to the nationals of the two countries concerned, but this resulted in a confusing patchwork quilt of protection. Eventually, under the leadership of the French novelist Victor Hugo, the first international copyright treaty, the Berne Convention, was signed in 1886. Only eight countries ratified it initially (Belgium, France, Germany, Great Britain, Italy, Spain, Switzerland and Tunisia). Notably absent was the United States, which did not join Berne until 1989. Today, over 180 countries are members, and some economies (like Taiwan, Hong Kong and Macao) that are not parties to Berne but are members of the WTO also apply its terms.  

When Britain joined Berne in 1886, the Convention’s provisions applied to all parts of the British Empire, including Canada, as Canada was not at that time a fully sovereign state. Nonetheless, copyright was among the powers conferred on the new dominion established in 1867, and in 1868 Canada enacted its first Copyright Act. This ambiguity (a dominion with its own copyright legislation yet bound by imperial adherence to Berne) led to conflicts with the imperial government, and on several occasions Canada tried to exit Berne before finally acceding in its own right in 1928. Canada also tried to pass legislation favouring Canadian works over those from Britain, only to have the legislation vetoed by the Governor-General. 

One of the issues was Canada’s desire to promote the Canadian printing industry. British works were protected by copyright in Canada, but Canadian printers could not normally get printing rights from British publishers, even though the imported British editions were expensive. In the US, however, British works were freely printed without permission (“pirated”) because US copyright law did not protect British works. Canadian booksellers, instead of importing the expensive British editions of Charles Dickens and other British writers, would import the much cheaper pirated US editions. This was technically illegal, but the border was long and leaky. British publishers tried to pressure the British government to lean on Canada to block importation of their works printed in the US, but with limited success. 

Just as British works were not protected in the US, nor were US works protected in Britain or other countries such as Canada. Thus, Canadian printers freely produced Canadian editions of US works by writers such as Samuel Clemens (Mark Twain), without permission or payment of royalties. That was legal at the time; what was not legal was the resale of these Canadian editions back into the US, a frequent occurrence. Clemens famously complained about “Canadian pirates” and tried to publish some of his works first in Montreal so he could claim British and Canadian copyright. Finally, in 1891, the US agreed to respect the copyrights of foreign authors, but only on condition that their works be typeset in the US. 

For many years before the US finally joined Berne in 1989, US publishers sought to obtain the benefits of Berne’s widespread international protection (available only to acceding parties) by simultaneously publishing works in the US and in Canada, through their Canadian publishing subsidiaries. This became known as “the back door to Berne”. Berne establishes a number of basic principles and commitments to minimum levels of protection by acceding states. For example, under Berne no formal registration is required to establish copyright, provided that the fundamental requirements of originality, nationality and fixation are met, although registration can be offered as an option, as is the case in both Canada and the United States. In the US, registration is required before a legal action can be taken to enforce a copyright. Berne also requires a minimum term of protection of the life of the “author” (meaning the creator of the work, even of a visual work) plus 50 years after the author’s death, although countries are free to establish a longer period of protection. The US, the EU and a number of other states, now including Canada, have extended the duration of copyright protection to “life plus seventy”, the twenty-year extension giving an author’s estate, or those who have acquired the rights, a longer period during which to exploit the work. 

A key principle under Berne is “national treatment”: each country is required to apply the provisions of the Convention within its jurisdiction to nationals and non-nationals alike, consistent with Berne’s minimum standards. Today Canada and the US are both parties not only to Berne but also to many of the myriad specialized copyright treaties, such as the “Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled”, which establishes exceptions to copyright for visually impaired people. 

It is not only international treaties that impose certain commitments on Canada, but also bilateral agreements such as the updated NAFTA, the CUSMA (called the USMCA in the US). One of the commitments that Canada made in CUSMA was to extend its term of copyright protection to align with that in the US. The implementing legislation for this commitment has now been enacted, and the longer term of copyright protection came into effect in Canada on December 30, 2022. However, the longer term applies only to works still under copyright protection on that date; any works that entered the public domain before then do not benefit.  

Given recent Canadian government initiatives such as Bill C-11 (the Online Streaming Act) and Bill C-18 (the Online News Act), both of which will require US companies ranging from Netflix and Disney to Facebook and Google to contribute financially to the production of Canadian content (in the case of streaming) or to Canadian journalism (in the case of news content), one could well ask whether the US will try to intervene on behalf of these large US content and tech companies. After all, one of the basic principles of CUSMA is national treatment for US companies in Canada, and vice versa, with specified exceptions. 

One of these exceptions is the so-called cultural exception, Article 32.6 of CUSMA, which relates to what could be called “copyright industries”. This article allows Canada to take actions to protect culture in ways that would be inconsistent with the Agreement but for the exception. Film and television production, as well as the publication of magazines, periodicals and newspapers, and radio, TV and cable broadcasting, are all included in the definition of a cultural industry. However, Article 32.6 has a sting in its tail. It allows the other parties (the US or Mexico) to take equivalent measures of retaliation in any sector of the economy if the cultural exception is invoked. In effect this means that if Canada uses the cultural exception to justify measures against US (or Mexican) companies, other sectors of the Canadian economy could suffer the consequences. As such, it is a poison pill, which explains why it is very improbable it will ever be used. Instead, if there is a US trade challenge to these pieces of legislation – which is unlikely given the range of US interests involved (some of which support the legislation) – the Canadian government will not invoke the cultural exception but will argue that the measures it is implementing are not aimed at US companies per se, but rather at specific commercial entities that have an excessive degree of competitive market power. Thus, there is no violation of the national treatment principle. At the present time, the only companies that fit the definition happen to be American, but in future the definition could extend to European companies or Chinese entities like TikTok or, potentially, Canadian companies.

All this just goes to show that no nation is an island, whether it concerns specialized areas like copyright or broader issues like AI harms. Copyright has adapted over the years to technological change, with AI being but the most recent example. Likewise, Canada has adjusted to the international framework regulating copyright, to its advantage and to the benefit of its creative industries, and it will need to continue to do so in future.

(c) Hugh Stephens, 2023. All Rights Reserved