Looking Back at 2024: It’s All About AI and Copyright (And a Few Other Things)

Image: Shutterstock

A retrospective on the year now coming to a close is what one expects this time of year, so I will try not to disappoint. However, when I look back at the copyright developments I wrote about in 2024, the dominant issues that jump out are AI, AI and AI. You can’t read or think about copyright without Artificial Intelligence, or to be more correct, Generative Artificial Intelligence (GAI), occupying most of the space despite many other issues on the copyright agenda. The mantra of “AI, AI and AI”, as in “Location, Location, Location”, is apt because there are at least three important copyright dimensions related to AI: training of AI models; copyright protection for outputs generated by AI; and infringement of copyright by works created with or by AI. Of the three, the use of copyrighted content for AI training is the most salient.

Last year in my year-ender, I also discussed AI and the numerous lawsuits that were emerging as rightsholders pushed back on having their content vacuumed up by AI developers to train their algorithms. Those lawsuits have only multiplied. At last count, there are more than 30 cases in the US, ranging from big media vs big AI (New York Times v OpenAI/Microsoft) to class action suits brought by artists and authors, as well as litigation in the UK, EU, and now in Canada (see here and here). That is just on the input side.

In terms of output, i.e. whether works produced by an AI can be copyrighted, there are a couple of interesting cases in the US where applications for copyright registration have been refused by the US Copyright Office (USCO) because of a lack of human creativity. A couple of months ago, I discussed two such high-profile cases, one brought by Stephen Thaler, and the other by Jason Allen. To date, the USCO is not budging, although it is undertaking an extensive study of the issue. Part 1 of its study, on digital replicas, was published in July of this year. The next section, on copyrightability, is expected to be published in January, with the issues of ingestion for training and licensing to follow in Q1 2025.

While the USCO has to date denied applications for copyright registration of AI-generated works, the Canadian copyright office (CIPO-Canadian Intellectual Property Office) has been caught up in a problem of its own making. This is because Canadian copyright registration is granted automatically, so long as tombstone data and the prescribed fee are provided. The work for which registration is sought is not examined. As a result, copyright certificates have been issued to works created by AI, notwithstanding the general presumption that copyright protection is accorded only to human-created works (although this is not explicitly stated in the Act). In July a legal challenge was launched against copyright registrant Ankit Sahni, who successfully registered a work with CIPO claiming an AI as co-author. The case was brought by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa, as I wrote about here. (Canadian Copyright Registration and AI-Created Works: It’s Time to Close the Loophole).

While the courts in the US, UK, Canada and elsewhere are grappling with various issues related to AI and copyright, governments are studying the issue.

In Australia, the Select Committee on Adopting Artificial Intelligence issued its final report in November. While the report was wide-ranging, three of its recommendations related to copyright:

engagement with the creative industry to address unauthorized use of their works by AI developers and tech companies,

transparency in training data, by requiring AI developers to disclose the use of copyrighted works in training datasets and ensure proper licensing and payment for these works, and

remuneration for AI outputs, with an appropriate mechanism to be determined through further consultation.

These are important principles, but how they will be implemented in practice remains to be determined.

In Canada, a consultation on AI and copyright was launched late in 2023, with submissions to be received by January 15, 2024. The Canadian cultural community put forth three key demands:

No weakening of copyright protection for works currently protected (i.e. no exception for text and data mining to use copyrighted works without authorization to train AI systems),

Copyright must continue to protect only works created by humans (AI-generated works should not qualify), and

AI developers should be required to be transparent and disclose what works have been ingested as part of the training process (transparency and disclosure).

Submissions to the consultation were published in mid-year but since then there has been no apparent action. Given the current political crisis facing the Trudeau government, none is expected in the near term although the issue will inevitably have to be addressed after the general election in 2025.

While the EU has already established some parameters dealing with use of copyrighted materials for AI training, the new UK Labour government is taking another run at the issue after various proposals in Britain to find a modus vivendi between the AI and content industries under the Tories went nowhere. The current UK discussion paper on Copyright and Artificial Intelligence, which seems excessively tilted in favour of the AI industry, has aroused plenty of controversy. While it says some of the right things, such as proclaiming that one of the objectives of the consultation is to “support…right holders’ control of their content and ability to be remunerated for its use”, the thrust of the paper is to find ways to encourage the AI industry to undertake more research in the UK by establishing a more permissive regime with respect to use of copyrighted content. It is based on three self-declared principles (notice how these things always seem to come in threes?):

Control: Right holders should have control over, and be able to license and seek remuneration for, the use of their content by AI models,

Access: AI developers should be able to access and use large volumes of online content to train their models easily, lawfully and without infringing copyright, and

Transparency: The copyright framework should be clear and make sense to its users, with greater transparency about works used to train AI models, and their outputs.

These three objectives then lead to what is clearly the preferred solution:

“A data mining exception which allows right holders to reserve their rights, underpinned by supporting measures on transparency”

Fine in principle, but the devil is always in the detail and the details in this case revolve around transparency (how detailed, what form, what about content already taken?) and, in particular, reservation of rights, aka “opting out”. This is easy to proclaim in principle but difficult to do in practice. British creators are up in arms, led by artists such as Paul McCartney, and supported by the creative industries in the US. The British composer Ed Newton-Rex has penned a brilliant satire explaining how AI development in the UK will work if the current proposal is enacted. The problem with an opt-out solution is essentially twofold: it doesn’t deal with content already absorbed by AI developers, and it would be cumbersome if not impossible for many rightsholders to use.

Other governments have addressed the issue in different ways. Singapore has taken a very loose approach toward copyright protection, putting its thumb firmly on the scale in favour of AI developers. It is currently considering additional proposals that would strip even more protection from rights-holders, who are pushing back strongly. Japan had been widely and incorrectly reported to have been on the same path, resulting in a welcome clarification this year from the Agency for Cultural Affairs regarding the limits of Japan’s text and data mining (TDM) exception.

While AI dominated the copyright agenda in 2024, there were other issues relating to copyright and copyright industries that I wrote about. The ongoing question of payment for news content by large digital platforms continued to play out in different ways. In Canada, the struggle between the government and US tech giants Google and META was finally “resolved” (after a fashion) at the end of last year. Google agreed to “voluntarily” pay $100 million annually into a fund for Canadian journalism in return for being exempted from the Online News Act (ONA), while META called the government’s bluff by blocking Canadian news providers from its platform thus, in theory, avoiding being subject to the ONA. However, META takes a very subjective view of what constitutes Canadian news content, allowing some news providers to post to its platform, while many users have found workarounds, as documented by McGill’s Media Ecosystem Observatory. Although the CRTC has investigated, the issue remains unresolved.

Meanwhile in Australia, it seems that META intends to go down the same road of blocking news, announcing it will not renew the content deals it initially signed with Australian media in response to Australia’s News Media Bargaining Code, the model upon which Canada’s legislation was based. Unlike in Canada, the Australian government is planning a robust response. (More on this in a future blog post). Finally, on the same topic, California (which was threatening to introduce its own version of legislation to require digital platforms to compensate news content providers) emerged with an outcome very similar to that reached in Canada, with Google offering up some funding (although proportionally less than in Canada) while META appears to have walked away.

Controlled Digital Lending (CDL) was another copyright issue finally settled in 2024 (in the US). The Internet Archive had lost a lawsuit brought against it by a consortium of publishers, who argued that the digital copying of their works constituted copyright infringement, notwithstanding the Archive’s theory that it was simply lending a digital version of a legally obtained physical work held by it (or by someone else associated with it). This year the Archive lost its appeal, and in December the deadline for further appeals expired, thus effectively ending this saga. Whether Canadian university libraries, some of which are avid devotees of CDL, will take note remains to be seen.

The issue of circumventing a TPM (“Technological Protection Measure”), commonly referred to as a “digital lock” and often represented by a password allowing access to content behind a paywall, was also front and centre this year in Canada. Blacklock’s is a digital research service that sells access to its content and protects that content with a paywall, as is common for many online content providers, like magazines and newspapers. In the case of Blacklock’s Reporter v Attorney General for Canada, the Federal Court found that an employee of Parks Canada, who shared a single subscription to Blacklock’s with a number of other employees by providing them with the password, did not infringe Blacklock’s copyright, since the employee did not circumvent (in the meaning of the law) the TPM and the purpose of the sharing was for “research”, which is a specified fair dealing purpose.

Despite the hoo-ha of anti-copyright commentators asserting the Court had found that “digital lock rules do not trump fair dealing“, it was equally clear the Court had ruled that fair dealing does not trump digital locks (TPMs). The Court did not undermine the protection afforded to businesses to protect their content through use of TPMs. Rather, it determined that sharing a licitly obtained password did not constitute circumvention as outlined in the Act, as I explained here. (Fair Dealing, Passwords and Technological Protection Measures (TPMs) in Canada: Federal Court Confirms Fair Dealing Does Not Trump TPMs (Digital Lock Rules)). Although the Court did not legitimize circumvention of a TPM for fair dealing purposes, contrary to claims stating the opposite, its acceptance of password sharing is an outcome that legal experts have disagreed with (as do I, for what it is worth). The law is very clear that fair dealing cannot be used as a pretext or a defence against violation of the anti-circumvention provisions of the Copyright Act. The decision is now under appeal by Blacklock’s.

Finally, the last copyright point of note for 2024 is that this year marked the bicentenary of the introduction of the first copyright legislation in Canada, in the Assembly of Lower Canada, in 1824. It also marked the centenary of the entry into force of the first truly Canadian Copyright Act on January 1, 1924. This two hundred years of domestic copyright history is worth celebrating. The first legislation was introduced “for the Encouragement of Learning” so that more local school texts would be written and printed. Given the current standoff between the secondary and post-secondary educational establishment and Canadian authors and their copyright collective over license payments for use of copyrighted works in teaching, one wonders whether we have really learned anything about the role copyright plays in our society. (Copyright and Education in Canada: Have We Learned Nothing in the Past Two Centuries? (From the “Encouragement of Learning” to the “Great Education Free Ride”)).

Leaving that question with you to ponder, gentle Reader, is probably a good way to end this look back over the past 12 months. Stay tuned for more commentary on copyright developments in 2025.

© Hugh Stephens, 2024. All Rights Reserved.

Government Announces an Artist’s Resale Right Will be Finally Coming to Canada!……Well, Maybe but Don’t Count On It.

Image: Author

Canada’s long-awaited 2024 Fall Economic Statement, reporting on the country’s finances for 2023-24 and outlining future spending and legislative priorities, was tabled in the House of Commons on December 16. Normally the big news would be that the budget deficit hit almost $62 billion, a 50% increase (almost $22 billion) over the projected deficit announced in the last budget just over six months ago, but all that was upstaged by the announced resignation of Finance Minister Chrystia Freeland the morning of the day she was to deliver the Statement. This has led to a crisis within the Liberal caucus as to whether Prime Minister Justin Trudeau should himself resign. Buried in the 270-page document, which outlined some $23 billion in planned new spending just months after the April 2024 budget proposed over $50 billion in other spending initiatives, was a tiny little bone for the copyright community. On p. 140, we find this commitment:

Protecting Artists’ and Creators’ Copyrights

“Artists, particularly visual artists, are among the lowest income earners in Canada despite their significant cultural contributions. An Artist’s Resale Right provides the creators of original visual artwork with a royalty whenever their work is resold through an eligible sale, offering an additional income stream.
In the 2024 Fall Economic Statement, the government announces its intent to amend the Copyright Act to create an Artist’s Resale Right in Canada, ensuring Canadian visual artists benefit from future sales of their work.”

CARFAC, the non-profit that represents visual artists in Canada (along with its Quebec counterpart, RAAV), was quick to publicize and celebrate the announcement. And so it should. It is important to get support for the Artist’s Resale Right (ARR) on the record, in the hope that this time the promise will actually be actioned. I have written about the ARR in the past, in 2021 and 2022, here (The Artists’ Resale Right: A Matter of Simple Fairness) and here (Will the “Artists’ Resale Right” Come to Canada and the US?). Bringing it to Canada is not a new idea. Artists have been campaigning for it for more than two decades. It exists in many countries (although not in the US) and has been around for more than a century, having been first initiated in France in 1920. Today it exists in over 90 countries, including all members of the EU, Britain, Australia, Mexico and a number of African states. An important factor is that it is applied reciprocally, and is fully consistent with the Berne Copyright Convention.

The announcement on December 16 makes it appear that enactment of the ARR in Canada is tantalizingly close, but it is worth remembering that after the last general election in 2021, the mandate letter for the Minister for Innovation, Science and Industry, who holds lead legal responsibility for amending the Copyright Act, included the following instruction: “Work with the Minister of Canadian Heritage to amend the Copyright Act to further protect artists, creators and copyright holders, including to allow resale rights for artists.” That was three years ago and since then absolutely nothing has happened to implement this instruction. In fact, the only action to date has been the restatement this month of the government’s continued intention to do so. But time is fast running out for the Liberals, and consequently for any real prospect that the Copyright Act will be amended before the current Trudeau minority government falls, resulting in a general election. To say that the governing Liberal Party is not expected to win would be the understatement of the year.

Before speculating further on the prospects of the ARR actually coming to Canada any time soon, it is worth a quick refresher on what it is. CARFAC’s press statement describes the ARR as “a royalty that allows artists to share in the wealth they generate in the marketplace”. It is not known exactly how it would be implemented in Canada. While most Resale Right regimes are similar, they are not identical, although the basic premise is that where sales of artistic works (works of graphic or plastic art such as pictures, collages, paintings, drawings, engravings, prints, lithographs, sculptures, tapestries, ceramics, glassware and photographs) take place beyond the initial sale, a small proportion of the re-sale price is remitted to the original artist or their estate, with post-mortem payments limited to a specified number of years. Often there is a sliding scale for payments, with the percentage going to the artist decreasing as value increases. Sometimes there is a sale-price ceiling beyond which a resale royalty is not levied, and there can also be a cap on the total amount paid. Private sales are usually excluded; the ARR applies only to works sold through galleries or auction houses. You can guess who might be opposed to it.

CARFAC’s proposal is for an ARR to be applied to secondary sales of $1000 or more that are conducted by a dealer or auction house, at a flat rate of 5%, with the liability to pay shared between the seller and the dealer, as is currently the case in the UK. It would apply only to Canadian artists and to artists of countries that offer an ARR (thus the reciprocity angle). Works of Canadian artists resold in other countries offering an ARR would be eligible for ARR payments from galleries or auction houses in those countries. Payment would be collected through a copyright collective, Copyright Visual Arts, owned jointly by CARFAC and RAAV. Notably, since the US does not have an ARR, works of US artists resold in Canada would not be eligible.

However, for all this to happen, the current Trudeau government will have to get its act together to introduce and pass the legislation, something that seems unlikely given the dilatory approach of the past three years plus the current existential crisis facing this government. The government has mused about copyright reform but has not walked the talk. Now it has almost run out of runway, and seems to be sleep-walking over a cliff. Public opinion polls put the party and the Prime Minister at all-time lows, around 20% support, and a large number of Liberal caucus members have spoken up to urge Trudeau to resign. As a minority government, it must be supported in a confidence vote by at least one other party, and that support seems increasingly tenuous. In a best-case scenario for the Liberals, the government could last until October 2025 when there is a statutory requirement for a general election, but its demise is likely to come months before that.

Where is there room to introduce ARR amendments to the Copyright Act in the midst of all this uncertainty? Such amendments would have to be reviewed in Committee, given the guaranteed opposition of the art dealers. With the Trudeau government’s fate hanging by a thread, not to mention the increasingly strident threats to impose punishing tariffs on Canadian exports to the US coming from the Truth Social account of Donald Trump, dealing with this bit of legislative housekeeping seems most unlikely.

That is not to say that amendments to the Copyright Act have not been introduced quickly in the past. The extension to Canada’s term of copyright protection required by the CUSMA/USMCA agreement was quietly and quickly enacted to meet the deadline of December 31, 2022, and back in the Stephen Harper era an amendment was quickly passed to extend copyright protection on sound recordings by twenty years from date of release. The big difference is that Harper held a strong majority in Parliament at the time, and the amendment was relatively noncontroversial. It is most unlikely, however, that the current limping government will expend much political capital to introduce an ARR in Canada at this time. As for the apparent “government-in-waiting” of Pierre Poilievre’s Conservatives, as far as I know they have not pronounced on the issue. It is unlikely to be a priority, given that they claim to have so many other things to do, from ending the so-called “carbon tax” (aka price on carbon) to defunding the CBC.

One element in favour of an ARR is its “reconciliation” and social justice impacts. It is supported by Indigenous artists and organizations representing them, such as the Inuit Tapiriit Kanatami and the Nunavut Arts and Crafts Association. Often cited is the case of the noted Inuit artist Kenojuak Ashevak, whose famous graphic work “Enchanted Owl” (the image on this blog) was initially sold by her for the princely sum of $24 in 1960. That’s all she got in her lifetime although the original work was sold and resold for many multiples of that both in her lifetime (she died in 2013) and after.

If Canada ever gets around to introducing an ARR, will the US follow? I suppose it is a possibility. Since Donald Trump seems to be making policy daily by tweet on Truth Social, I guess we will have to stay tuned. Maybe someone could float the idea with Elon? In the meantime, don’t hold your breath for the ARR to come to Canada within the life of the current Trudeau government, tantalizing as the recent announcement is. I wish it were otherwise. And I would love to be proven wrong.

© Hugh Stephens, 2024. All Rights Reserved.

CanLII v CasewayAI: Defendant Trots Out AI Industry’s Misinformation and Scare Tactics (But Don’t Panic, Canada)

Image: Pixabay

Last month I highlighted the first AI/Copyright case in Canada to reach the courts, CanLII v CasewayAI. CanLII, (the Canadian Legal Information Institute), a non-profit established in 2001 by the Federation of Law Societies of Canada, sued Caseway AI, a self-described AI-driven legal research service, for copyright infringement and for violating CanLII’s Terms of Use through a massive downloading of 3.5 million files which Caseway allegedly used to populate its AI based services. Now the principal of CasewayAI, Alistair Vigier, through an article (Don’t Scare AI Companies Away, Canada – They’re Building the Future) published in Techcouver, has responded publicly by trotting out many of the tired and specious arguments put forward by the AI industry to justify the unauthorized “taking” of copyrighted content to use in or to train generative AI models. Let’s have a closer look at these arguments.

Vigier opens by referencing another AI/Copyright case in Canada where a consortium of Canadian media companies is suing OpenAI for copyright infringement. He claims this is all based on a misunderstanding of how AI training works, stating that “AI systems like OpenAI rely on publicly available data to learn and improve. This does not equate to stealing content.” Whether data is “publicly available” or not is irrelevant when it comes to determining whether copyright infringement (aka stealing content) has occurred. Books in libraries are publicly available, as is a book that you purchase in a bookstore, or content on the internet that is not behind a paywall. (It is worth noting that the Canadian media companies also claim that OpenAI circumvented their paywalls to access their content when copying it). But in none of these cases is copying permitted unless the copying falls within a fair dealing exception, which is very precise in its definition. Labelling copied material as “publicly available” is a red herring.

Vigier’s next argument is to equate the ingestion of content by various AI development models with a human being reading a book. We know that humans enhance their knowledge through reading and are thus able, presumably, to better reason based on the content they have absorbed. Vigier says, “This is how AI works. The AI “reads” as much as it can, gets really “smart,” and then explains what it knows when you ask it a question. Like a human learns from reading the news, so does an AI.”

Really? A human does not make a copy, not even a temporary copy, of the content, although some elements of the content are no doubt retained in the human brain. But AI operates differently. It makes a copy of the content. This should be beyond dispute, although the AI industry continues to muddy the waters by claiming that when content is “ingested” it is converted to numeric data and is thus not actually copied. This is a fallacious argument. Just because the form changes, this does not mean there is no reproduction. When you make a digital copy of a book, there is still reproduction even though the digital form is different from the original hard copy version. When a work is converted to data, the content is still represented in the dataset.
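For readers who want to see the point concretely, here is a deliberately simplified sketch. It is a toy word-to-number mapping, not how any real AI system’s tokenizer actually works (real tokenizers are far more sophisticated), but it illustrates the principle: once text is converted to numeric IDs, the “numbers” still contain the work, and the original can be recovered from them exactly.

```python
def build_vocab(text):
    """Assign an integer ID to each distinct word (a toy 'tokenizer')."""
    vocab = {}
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(text, vocab):
    """Represent the text purely as numbers, as ingested content is."""
    return [vocab[word] for word in text.split()]

def decode(ids, vocab):
    """Reverse the mapping: the numeric form still contains the work."""
    id_to_word = {i: w for w, i in vocab.items()}
    return " ".join(id_to_word[i] for i in ids)

sentence = "the form changes but the expression survives"
vocab = build_vocab(sentence)
ids = encode(sentence, vocab)
print(ids)                  # a list of integers, not words
print(decode(ids, vocab))   # the original sentence, recovered intact
```

The change of form (words to integers) plainly does not erase the expression; decoding restores the sentence verbatim, which is exactly why “it’s just numbers” is a weak answer to a reproduction claim.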

Vigier dubiously states, with regard to OpenAI, “OpenAI’s models do not reproduce articles verbatim; they process vast datasets to identify patterns, enabling insights and efficiency.” Apart from the fact that the New York Times in its separate lawsuit in the US has been able to demonstrate that by typing in the leads of articles, it can prompt OpenAI to reproduce verbatim the rest of the article (OpenAI claimed that the Times “tricked” the algorithm), copying is copying even if the result of the copying is somewhat different from the original. The Copyright Act is crystal clear on this point. Section 3(1) of the Act states that, “For the purposes of this Act, copyright, in relation to a work, means the sole right to produce or reproduce the work or any substantial part thereof in any material form whatever….” If copyright protected content is reproduced in its entirety without permission for a commercial purpose (e.g. for AI training), that is infringement, unless the use qualifies as a fair dealing under Canadian law or fair use in the US.

The issue of whether ingestion of content to train an AI application results in copying (reproduction) has been carefully studied and documented. One of the most thorough examples is a recent SSRN (Social Science Research Network) paper, entitled “The Heart of the Matter: Copyright, AI Training, and LLMs”, with noted scholar Daniel Gervais (a Canadian, by the way) of Vanderbilt University as lead author. The article goes into a detailed discussion of how copying of content occurs during AI scraping to build a Large Language Model (LLM), including the stages of tokenization and embedding, leading on to reward modelling and reinforcement learning. The section of the article explaining how copying occurs (pp. 1-6) is dense, technical text but the conclusion is clear: “LLMs make copies of the documents on which they are trained, and this copying takes various forms, and as a result, with appropriate prompting, applications that use the LLMs are able to reproduce original works.” A shorter (and earlier) version explaining how copying occurs in LLM training can be found in this article (“Heart of the Matter: Demystifying Copying in the Training of LLMs“), produced by the Copyright Clearance Center in the US. It is also worth noting that these explanations refer only to ingestion of text. AI models that train on images and music are even more likely to produce exact or close-to-exact reproductions of some of the works they have been built and trained on.

So much for the misinformation in Vigier’s article. Now to the scare tactics. He says that the recent Canadian media lawsuit against OpenAI sends a negative message to innovators that Canada may not be open to AI development.

If Canada wishes to remain relevant in this (AI) sector, it must balance protecting intellectual property and promoting technological progress.

The fact that there are currently more than 30 lawsuits in the US, including the seminal New York Times v OpenAI case, does not seem to have slowed down the AI companies in the US. In the UK, legislation has been introduced that would, according to British media reports, “ensure that operators of web crawlers (internet bots that copy content to train GAI, generative AI) and GAI firms themselves comply with existing UK copyright law. These amendments would provide creators with crucial transparency regarding how their content is copied and used, ensuring tech firms are held to account in cases of copyright infringement.” There is lots of AI innovation ongoing in Britain.

The Australian Senate Select Committee Report on Adopting AI has recommended, among other findings, that there be mandatory transparency requirements and compensation mechanisms for rightsholders. The EU is already way out in front on this issue. Its new AI Act stipulates that providers of AI generative models will be required to provide a detailed summary of content used for training in a way that allows rightsholders to exercise and enforce their rights under EU law. Even India now has its own version of the US and Canadian media cases against OpenAI. (OpenAI’s defence in part is based on the argument that no copying took place in India because no OpenAI servers are located there!)

If that is what the “competition” is doing, who does Vigier cite as being the jurisdictions most likely to attract innovators away from Canada? Why, it is those AI powerhouses of Switzerland, Dubai—and the Bahamas!

The argument that if legislators and the courts don’t give AI innovators a free pass on helping themselves to copyrighted content for AI training purposes, this will either slow down innovation or chase it elsewhere is a common fearmongering strategy of the AI industry. This is a race-to-the-bottom mentality whereby content industries are thrown under the AI bus. Vigier, now facing a lawsuit of his own, argues that instead of resorting to litigation, the Canadian media companies should have sought a licensing solution. But the fact that no licensing agreement was reached with OpenAI is undoubtedly the reason for the lawsuit in the first place. That is certainly the reason behind the NYT v OpenAI lawsuit in the US; licensing negotiations broke down. If someone has taken your content without authorization, and then offers you pennies on the dollar in comparison to what that content is actually worth, then the stage for a lawsuit is set.

In explaining CasewayAI’s position in the litigation brought by CanLII, Vigier says that Caseway approached CanLII with an offer to collaborate but was rebuffed. As a result, they developed extensive web-crawling technology that pulled the needed material from elsewhere. (Where exactly the material was downloaded from is the crux of the matter). Regardless, this makes it sound as if it was CanLII’s fault for refusing to share their content. Surely a rightsholder has the right to determine the terms on which their content is to be shared with others, if at all.

The fact that Caseway went to CanLII in the first place suggests that CanLII had developed the content that Caseway wanted. Caseway claims the material it accessed was on the public record, such as court documents and decisions. CanLII, on the other hand, claims that it had reviewed, indexed, analyzed, curated and otherwise enhanced the content in question, thus adding a wrapping of copyright protection to what otherwise would be public documents. Who is right, and whether the material was scraped from CanLII’s website without authorization, will be determined by the BC Supreme Court.

If the material taken by CasewayAI was not copyright protected, it is in the clear, at least with respect to copyright infringement. That is quite different, however, from arguing that no copying takes place during AI training or that if rightsholders use the courts to protect their rights, Canada will be a laggard when it comes to AI development. Robust AI development needs to go hand in hand with robust copyright protection for creators, with an appropriate sharing of the spoils of the new wealth generated from the creative work of authors, artists, musicians and other rightsholders. To say, as Vigier does in his concluding paragraph, that:

Canada has a choice to make. Will we embrace AI as the transformative force it is, or will we let fear and litigation stifle innovation? The lawsuits against Caseway and OpenAI message tech companies: you’re not welcome here. If this continues, Canada won’t just lose its AI startups; it will lose the future of job creation.

What sheer self-interested nonsense! This is fearmongering of the worst kind, based on an inaccurate and misinformed understanding of how AI is developed and trained, and it impugns the legitimate right of a rightsholder to seek the protection of the law for their creativity and investment in content. Vigier might be correct when he says that licensing of content is a win/win for both parties. I agree with that. But licensing negotiations are about money and conditions of use, and they require willing parties on both sides. When licensing discussions break down, or when one party decides to do an end run around licensing because it has been rebuffed, the way to gain clarity is through the courts, whose job it is to interpret what the legislation means.

Canada still needs to come to grips with the question of how copyrighted content will interface with AI development. As I noted earlier, both sides in the debate made their cases in the public consultation launched a year ago, but since then there has been no movement in Ottawa. The law could be strengthened to ensure adequate protection of rightsholder interests in an age of AI, thereby facilitating licensing solutions. In the meantime, misinformation and scare tactics need to be called out for what they are.

Adequate protection for rightsholders does not mean the end of AI innovation or investment in Canada. There is no need for panic. We can walk and chew gum at the same time.

© Hugh Stephens, 2024. All Rights Reserved.

India’s Intrusive and Overkill Tobacco Warnings for Streaming Film Content

Image: Shutterstock

The issue of how to handle smoking in films continues to be a real hot potato in many countries. As the film industry focuses on its responsibility to clearly label content that depicts smoking while avoiding any promotion of smoking in films, health authorities—in their understandable desire to curtail tobacco consumption—sometimes overzealously impose unrealistic conditions in pursuit of public health ends. It can be tricky to strike the right balance, ensuring that smoking is not promoted in films while avoiding undue censorship where some depiction of smoking is a necessary part of the storytelling. Likewise, anti-smoking warnings are a legitimate requirement—unless they are enforced through unrealistic regulations that not only interrupt the flow of a plot but require bespoke country-by-country editing to insert warnings into the narrative of the film. That said, I am fully aware that the tobacco industry will do just about anything to promote smoking and try to gain new consumers for its noxious products. (The marketing of candy-flavoured nicotine packs in corner stores near schools in some countries is a good example, if one were needed.) Various countries are grappling with this dilemma: protecting public health while ensuring that health regulations don’t inflict unintended collateral damage. India is no exception.

The current controversy in India is about new draft anti-smoking regulations for film content announced by the Ministry of Health and Family Welfare in September. The new regulations would require that online streaming services (Video on Demand), or what is known as “online curated content” (OCC) in India (think Netflix, Hotstar, Amazon Prime, SonyLIV etc), display 30-second, non-skippable anti-smoking health spots at the start of—and in the middle of—any film that depicts smoking. In seeking to impose these requirements in such a heavy-handed and frankly unrealistic way, the Health Ministry managed to step clumsily on the toes of the Ministry of Information and Broadcasting (MIB), which is responsible for regulation of broadcasting. The mandatory insertion of health spots like this is both unnecessary and impractical, apart from being a questionable intrusion into MIB’s domain.

In addition to the requirement for tobacco warning spots at the beginning and midway point of films where smoking takes place, according to Indian media reports the regulations also require the insertion of a 20-second audio-visual disclaimer notice warning about the dangers of tobacco use, to be displayed before the film rolls. In scenes where smoking or tobacco consumption is depicted, static anti-tobacco warnings must be displayed throughout. This is a classic case of repetitive overkill. The editing and physical requirements to insert all these warnings into the streaming content is no small matter. Existing global catalogues will need to be reviewed and re-edited if the content is to be shown in India. Of course nothing is impossible, even if impractical, but the likely result, if the insertion requirement is maintained, is that many films will not be shown in India. The cost of compliance will be just too great to justify the benefit of providing that particular film. Responsible members of the Indian viewing public will be the losers; others will resort to accessing pirated content to view the movies they want to see. Piracy in India is already a major issue. This proposal has the potential to supercharge it.

This sledgehammer approach is excessive and certainly not needed. Films on OCC platforms already contain warnings about depiction of smoking (as well as other relevant warnings regarding the nature of the content to be shown). Moreover, online curated content (OCC) is different from content broadcast over-the-air or distributed by cable or satellite. It is “on demand” content that is “pulled” by viewers from a menu rather than being “pushed” by broadcasters. As a result, there is a much greater element of choice and judgement on the part of the consumer. No one is going to be surprised by accidentally tripping over a film where the heroine lights up a fag or the erudite detective puffs away on a pipe. Nor is this likely to make them rush out and stock up on bidis. Viewers know in advance what they are going to watch and choose appropriately labelled and classified content accordingly.

Indeed, the “rub your nose in it” repetitive flashing of anti-smoking warnings runs the risk of turning off OCC consumers, who will tune out messages of this nature as they find themselves constantly bombarded with anti-smoking messages as they scroll for what they want to watch. For customers who routinely browse content across several streaming platforms to find their viewing preference, this could well turn them off the streaming experience altogether, just as streaming is starting to get traction in India. It is far better to deliver the message through categorization and clear labelling of content so that viewers are aware of what they will be exposed to—and are prepared to make judgements accordingly.

There is no question that smoking needs to be combatted, and that India has a smoking problem. A 2015 study identified India as the world’s second largest market for tobacco products (after China) with a smoking rate of 48% among males and 20% among females. India has already taken steps to address this problem. It was one of the first countries to ratify the World Health Organization’s Framework Convention on Tobacco Control, in 2003.

The Convention calls for a range of measures to combat smoking, including prohibiting tobacco advertising that is false, misleading, or deceptive and requiring health warnings on tobacco advertising, promotion and sponsorships. It also requires restrictions on incentives to purchase tobacco and prohibition, if legally possible, of tobacco sponsorship of international events and activities. Guidelines attached to the Framework Convention suggest the elements of a comprehensive ban on tobacco advertising should include “implementing a ratings or classification system that takes tobacco depictions into account.” That is what international streamers and OCC providers in India are committed to and already provide.

Indian consumers of streaming entertainment content need to be treated as adults who, having been notified of what they are about to watch, can make informed judgements without having their viewing experience interrupted with audio visual inserts before and during the film while static messages distract their attention. There is no need for the Health Ministry to drag out its big cannons to blow holes into carefully curated and suitably labelled content, trampling on the jurisdiction of the Ministry of record in the process while creating havoc in the film industry to the detriment of creators and viewers alike. By all means combat the public health scourge of smoking, but do it in a responsible, proportionate and balanced way.

© Hugh Stephens, 2024. All Rights Reserved.

Another AI Scraping Copyright Case in Canada: News Media Companies Sue OpenAI

Image: Shutterstock (AI assisted)

First, I heard it on the radio. The word “copyright” caught my attention because that’s a word seldom heard on the morning news. Then the news stories started to appear, first on Canadian Press, which was “largely” accurate, then on the CBC, Globe and Mail, even the New York Times. A consortium of Canadian media, including the Toronto Star, Postmedia, the Globe and Mail and CBC/Radio-Canada, is suing OpenAI in Ontario Superior Court for copyright infringement and for violating their Terms of Use. The publishers are seeking CAD 20,000 per infringement plus an injunction to prevent further infringement. The case largely parallels a similar one in the US brought by the New York Times against OpenAI and its largest investor Microsoft, which I wrote about earlier this year (When Giants Wrestle, the Earth Moves (NYT v OpenAI/Microsoft)).

Despite what the press articles state, this is not the first case in Canada where copyright infringement has been alleged as a result of data being scraped to use in AI applications, as I noted last week. However, it is the first case where news organizations have gone after an AI development company. It also has nothing to do with the Online News Act as stated in the Canadian Press report. In fact, it is the absence of legislation in Canada regarding copyright and AI that is partly responsible for this being fought out in the courts.

OpenAI in its statement cited “fair use” and “related international copyright principles” to justify its behaviour. The fact that the US fair use doctrine does not apply in Canada, combined with the closed nature of fair dealing exceptions and the lack of a Text and Data Mining exception in Canadian law, could prove troublesome for OpenAI. It also has the effrontery to state that it offers “opt-out” options for news publishers. When you are taking someone’s proprietary content without permission or payment, it is an insult to tell them they can always opt out. To steal, and then to tell your victim to request that you not steal again, is hardly the way ethical companies operate.

One question to be decided is whether the scraped content is protected by copyright at all, since it is a well-established principle that the “news of the day” is not subject to copyright protection (see Do News Publishers “Own” the News?). News media may not have a monopoly over reporting on what is happening in, say, Gaza, but they certainly have the rights to their expression of what is happening through their coverage. OpenAI has also apparently said that its web crawlers are just “reading” publicly available material, as a human being would do. However, reading and copying are two different things, although proving reproduction may be difficult given the unwillingness of OpenAI to disclose its training methods, an issue that has come up in the New York Times case. “Publicly available” is irrelevant, since being publicly available on the internet, or in a library, or anywhere else, does not justify copyright infringement.

In their suit, the plaintiffs are also alleging circumvention of a TPM (technological protection measure, sometimes referred to as a digital lock, which puts content behind a paywall). This is a separate violation of the Copyright Act. In addition, they are alleging violation of their Terms of Use, which are linked to their websites. When a user accesses material on the publishers’ websites, they must agree to the Terms of Use which, among other things, state that the content to be accessed is for the “personal, non-commercial use of individual users only, and may not be reproduced or used other than as permitted under the Terms of Use”, unless consent is given.

The publishers state that OpenAI was well aware of the need to pay for their content and to obtain permission to use it. That is essentially the position also taken by the New York Times. OpenAI has reached licensing agreements with some publishers, including the Associated Press, Axel Springer (Business Insider, Politico), the Financial Times, the publishers of People, Better Homes and Gardens and other titles, News Corp (Wall Street Journal and many others), The Atlantic, and others. But no agreement was reached with the New York Times, obviously (negotiations broke down, leading to the current lawsuit), nor with any of the Canadian media bringing suit. A licensing agreement acceptable to both parties will be the likely outcome of this case. As the US-based Copyright Alliance has pointed out, generative AI licensing isn’t just possible, it’s essential.

There is a vacuum when it comes to legislation in Canada, and elsewhere, regarding the intersection of copyright and AI development. Various models are being experimented with, from the “throw copyright under the bus” model in Singapore to a more nuanced model in Japan, to uncertainty elsewhere. Australia has just produced a Senate report in response to its public consultation on the issue. Among its recommendations, the Select Committee Report on Adopting Artificial Intelligence called for changes that would ensure copyright holders are compensated for use of their material, while tech firms would be forced to reveal what copyrighted works they used to train their AI models. Canada initiated a public consultation on the topic last year, and the Australian Committee’s recommendations with respect to copyrighted content are essentially what the Canadian copyright community asked for. However, since receiving input in January of this year and publishing the submissions received in June, there has been no further information released by the Canadian government. A conclusion similar to the recommendations in Australia would be welcome.

Canadian creators and rightsholders are waiting for some action. Meanwhile the only alternative is to toss the issue to the courts to adjudicate.

© Hugh Stephens, 2024. All Rights Reserved.