Warning: this blog post has a long opening sentence, so take a deep breath.
I know it is sheer coincidence–since my blog has zero influence over Canada’s copyright policy and the bureaucrats who beaver away in the bowels of the ISED and Heritage Departments working on copyright issues–but I couldn’t help but take a smidgeon of satisfaction from the fact that just a couple of weeks after I published a critical comparison of the activist approach taken by the US Copyright Office on AI and copyright issues, compared to the passive non-response from Canada’s copyright policy makers (A Tale of Two Copyrights, wherein I described the Canadian approach as being one of confusion and ambiguity), the department with statutory responsibility for copyright policy (ISED, aka Innovation, Science and Economic Development Canada) released a consultation paper on this very issue. Titled “Consultation on Copyright in the Age of Generative Artificial Intelligence”, it came out on October 12. Not that the copyright mavens weren’t aware of the importance of the issue earlier. In fact, the National Post (“Unclear how Canadian copyright law applies to generative AI: government document”), quoting from a Department of Canadian Heritage document prepared back in May and retrieved through the Access to Information process, revealed that officials provided an assessment on the subject for then Heritage minister Pablo Rodriguez several months ago.
In that assessment, the briefing note apparently stated, “Rights holders argue the use of unlicensed copyright-protected works in the training of AI models infringes their copyright, and that regulation is needed”, while, on the other hand, “Many industry stakeholders wanting to promote innovation and technology will oppose any regulation of AI. Instead, they seek clearer exceptions in the Copyright Act to define and expand the permitted uses of copyright-protected materials.” Does regulation–of AI or anything else–impede the promotion of innovation and technology? The AI sector is crying out for regulation given the abuses that are already becoming apparent. Casting copyright as antithetical to innovation is part of the false narrative often put forward by the hi-tech sector. Have those who would freely scrape the internet of all content, copyrighted or not and accurate or not, all in the name of “innovation”, ever heard of licensing? That is what many content creators are now insisting on. First, they have to block the stealthy web crawlers, which is what the BBC and New York Times have done. Meanwhile, Getty Images is suing Stability AI for having taken its content without permission. Once the free ride has ended, licensing models can then be developed. This is already beginning to happen. In July of this year, the Associated Press announced a two-year licensing deal with OpenAI, the company behind ChatGPT. Others, such as the Guardian, Axel Springer, and News Corp, are reportedly in licensing negotiations with leading AI development companies.
To be fair to the bureaucrats at Canadian Heritage, perhaps they were not saying that copyright impedes innovation, but were only paraphrasing the position of many in the tech sector. The Heritage briefing note also discussed whether or not AI generated works should be eligible for copyright protection. As I noted in the Two Copyrights blog, the US Copyright Office has insisted on human creativity as an essential element for a work to be granted copyright protection. Canada has not taken a clear position on this (although it is implicit), yet the Canadian Intellectual Property Office (CIPO) has issued copyright certificates for AI-generated works. (This fact, and the blog post I wrote about it, was referred to in the current ISED discussion paper at Footnote 50. I guess some of the mavens do read the blog. Thanks mavens).
Ironically, while the tech sector has no hesitation about freely ingesting copyrighted content to create “new” AI-generated works (which often compete with the original works in the marketplace), it wants to have its cake and eat it too. Notwithstanding its position that taking copyrighted content without permission to produce AI-generated outputs is legitimate (and if not, it should be), tech companies producing AI-generated content still want that content to enjoy all the benefits of copyright protection. If it does not, the output of content developed by AI could be freely used and copied with no return for the content creators, and thus no economic incentive to produce or use it.
Moreover, producing content with a heavy element of AI-generated material that may have been trained via copyright infringement is drinking from a potentially poisoned chalice. This is a key issue that needs to be dealt with in any attempt at regulation and is one of the questions addressed in the current ISED discussion paper. To show how quickly things are changing, back in September of last year, Getty Images announced it would ban from its repertoire all images generated by AI in order to protect its users. A year later, in September 2023, Getty launched its “commercially safe” generative AI offering, built exclusively on licensed content. Content providers are compensated for any inclusion of their content in the training set, and subscribers using Getty content, including Getty’s AI-generated content, are fully indemnified by Getty against lawsuits. That’s the value of a licensing solution.
But back to the consultation paper just issued. It builds on an earlier discussion paper issued in 2021 seeking input on “A Modern Copyright Framework for Artificial Intelligence and the Internet of Things”. The government received about 40 responses at the time, from groups and individuals representing the creative community as well as the tech industry. Nothing happened, and the world has moved on. Now, we have a second consultation paper that addresses three principal questions:
Text and data mining – i.e., whether any clarification is needed on how the copyright framework applies to the use of copyright-protected works and other subject matter (e.g., a performance or sound recording) in the training of AI systems;
Authorship and ownership of works generated by AI – i.e., how the copyright framework should apply to AI-assisted and AI-generated works; and
Infringement and liability regarding AI – e.g., who are the persons liable when AI-generated works infringe copyright-protected works.
Let’s look at these three questions. Canada does not have a text and data mining (TDM) exception in its copyright law, unlike a number of other countries. In my view, this is something that should be addressed, although an approach that would protect creators by limiting a TDM exception to non-commercial research purposes, as in the EU, will be attacked by the tech industry as too little, too late. On the authorship issue, reconciling the requirement for human creativity with the ability to use AI to create, and defining the boundaries between AI-assisted and AI-created works, will not be easy. Finally, on infringement and liability, an important element will be documentation of inputs, something that AI development companies are loath to do. Getty’s approach of verifying legitimate inputs in order to be able to certify the legality of outputs is a good example of the need for transparency.
Responses are sought by December 4, via an online consultation form. Then what? This is not an area where speed of action is rewarded, as it is all too easy to get it wrong, given the rapid pace of adoption of AI, plus the various court cases underway (although not in Canada). On the other hand, the more one dithers, the more the regulatory regime becomes outdated and no longer fit for purpose. It is not an enviable position to be in. The consultation paper will have the virtue of buying time while seeking yet more input from all players. I am glad to see that Canada finally “got the memo” and is taking some action. If you have views, weigh in.
© Hugh Stephens 2023. All Rights Reserved.