The Humanity of Copyright

Source: Pixabay (modified)

Recently the US Copyright Office (USCO) reaffirmed its longstanding position that human authorship is a precondition for recognition of copyright, despite being pushed hard by a particular protagonist within the tech community to accept artificial intelligence (AI), in the form of a machine (NB his machine) or software, as a creator with rights. Stephen Thaler, PhD (identified by some sources as Steven Thaler) is, according to this website, the CEO of Imagination Engines Incorporated. His LinkedIn profile says that he is “a pioneer in the area of artificial intelligence and the inventor of the Creativity Machine(R) Paradigm, US Patent 5,659,666 and its derivatives.” This machine has apparently been used “to design a wealth of commercially available products”. It must be quite a machine, because it also, we are told, produces autonomous art. This art, a two-dimensional image entitled “A Recent Entrance to Paradise”, was the subject of the USCO Review Board’s recent decision, which ended a three-year struggle by Thaler to get his Creativity Machine recognized as the creator of the work.

According to the USCO, in 2018 Thaler sought to register the work, identifying the author as the “Creativity Machine”. Knowing the well-established position of the Copyright Office not to grant copyright to works created by non-human efforts, such as works created through AI, Thaler tried to invoke the process of recordation, whereby owners may transfer their copyrights to others and have the transfer recorded by the Office. In this case, he claimed that the work “was autonomously created by a computer algorithm running on a machine” and sought to register this computer-generated work as a “work for hire” owned by the owner of the Creativity Machine, himself. When Thaler’s application was refused, he appealed, arguing not only that the requirement for human authorship was unconstitutional but also that copyright law already allows non-humans (i.e. corporations) to be authors under the “work for hire” doctrine. Although his appeal was denied by the USCO, he appealed yet again, this time to the Review Board. In upholding the Office’s decision to reject the application, the Review Board cited multiple precedents establishing human authorship as a prima facie requirement for copyright registration. As for the “work for hire” argument, it pointed out that a work for hire must be prepared by an “employee” or by those who have contracted to produce a work for hire. In both cases a binding contract or agreement is required. A machine cannot enter into a contract, whereas a corporation (a legal entity) is able to do so. Moreover, the work for hire doctrine only identifies the owner of the work, not whether the work is protected by copyright, and since the work was not created by a human, it cannot be subject to copyright. Case dismissed.

One can speculate as to why Thaler was so insistent on imbuing his Creativity Machine with the attributes of authorship. Could it have something to do with raising the profile and marketability of his invention? You can imagine the coverage: “Art produced by AI Machine first to be awarded Copyright Protection.” Despite Thaler’s persistence, it ain’t gonna happen. Sorry, Steven (or is it Stephen?), but it was a valiant effort. You should have registered the copyright in Britain, where there is a provision for copyright protection for machine-generated works in which no human author is involved (see below).

Blogger and copyright scholar Stephen Carlisle has speculated that one outcome of the doctrine that machines cannot be authors will be that humans will claim for themselves works that have been produced by machines. In other words, they will lie to gain the protection of copyright for works created autonomously by AI. It is worth noting that Thaler’s position was exactly the opposite: he went to great lengths to specify that the work for which he was seeking registration was the creation of his Creativity Machine, not him. The Review Board anticipated the argument that denying copyright protection for machine-generated works will encourage dishonesty and inaccurate claims of authorship by noting the criminal penalties that exist for those knowingly making false representations, plus the fact that a registration can be cancelled if invalidly filed. In Britain a similar concern has been raised, since works produced by human authors get a longer term of protection than purely machine-generated works. Like the USCO, the British authorities consider that legal provisions regarding fraudulent representations provide sufficient protection against such activities.

The question of whether a machine can actually create an original, copyrightable work without human intervention is controversial. For now, in most jurisdictions, the presumption is that despite the use of machine learning, software, and technology, there is still a human hand behind original works. For example, a couple of years ago a Chinese court addressed this issue in a case involving an automated article written by Dreamwriter, a program created by Tencent, which had been copied and published without permission by another Chinese company, Yinxun. The court found that the article was nevertheless subject to copyright protection because it met the originality test through the involvement of a creative group of editors. These people performed a number of functions to direct the program, such as arranging the data input and format, selecting templates for the structure of the article, and training the algorithm model. In other words, there was sufficient human authorship, and thus there was copyright, and infringement of copyright.

Nonetheless, at times it is a fine line when it comes to assessing how much human creativity has gone into a machine-enabled work. Carlisle provides the example of Dmitry Evgrafov, the writer of the computer code for an app called “Endel”. Warner Music signed a deal to distribute 600 tracks created by Endel. What is significant is Evgrafov’s claim that the music was produced with a few clicks of a mouse, with minimal human involvement beyond chopping up the audio and mastering it for streaming. There are machines that allow people who cannot even read or write music to compose it, by manipulating various musical elements, such as rhythm and harmonics, and blending them into a “new” piece of music. How much human creativity is involved? Not much. So is the end product a creation of the software, or is it a human creation enabled by the software?

If AI is a current test case of human authorship, one cannot help but also reference the role of animals arising from the infamous Monkey Selfie case, about which I and others have written (“The Monkey Selfie Case: Applying the Common Sense Test” and “The Monkey Selfie Case: Will It Have Broader Repercussions for AI and Copyright?”). In this case, the issue revolved around whether wildlife photographer David Slater was the author of the monkey’s selfie, given that he had not himself clicked the shutter, despite having artistically staged and made technically possible the taking of the photo. Although that was the question, the judge in California ended up ruling on whether the copyright belonged to “Naruto”, the macaque that was the subject of the photo, rather than on whether the work was Slater’s or in the public domain.

Naruto’s claim was a publicity stunt by PETA (People for the Ethical Treatment of Animals), but it ended up financially ruining Slater. Of course, Naruto’s claim (presented on behalf of the animal by PETA as a legal “next friend”) was dismissed, but there were some interesting moments. During the trial, questions were raised as to whether Naruto’s “children and grandchildren” could be handed the copyright were it to be awarded (and how they would be identified), and whether Naruto was obliged to send letters to the other monkeys in the group to let them know about the court case, since some of them had also been inadvertently included in photos snapped by Naruto. Just as with the Creativity Machine, it was apparent that a non-human could potentially create something that might qualify as a work, but that such a creator could not exercise any of the other attributes necessary to establish copyright. If a reason is needed to deny copyright protection to works not produced by humans, this is a good one, but it still does not finally decide the question of whether, inevitably, there is always a human creator behind the non-human.

Think of elephants, those well-trained, hard-working creatures that not only allow us to ride them (sometimes), but which have been harnessed for centuries for logging, waging war and other human-directed endeavours. Give an elephant a paint brush, a couple of buckets of paint, and a canvas and you might get something that is no worse than some other forms of modern art. If it qualified as an original work, I would award the copyright to the human who created the framework for all this to happen. It is no different from using other means to create art: automatic spray guns, nude bodies smeared in paint, and so on. In this case, the instrument just happens to be an elephant rather than a machine. But someone created the concept and made it happen. A human.

Those who argue that the definition of author or creator should be expanded beyond humans point to the lack of specificity in the laws of most countries when it comes to defining who is an author. The Berne Convention, the cornerstone of international copyright protection, is a case in point. First drafted in the 1880s, it protects “authors”, but nowhere does it define the term. Rather, it protects “every production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression”. The drafters of the convention could not conceive of a literary, scientific or artistic work that was not the result of human endeavour, so human creation is implicit. In US law, the Copyright Act protects “original works of authorship”, but without defining what an original work is. However, the USCO Review Board in Thaler’s case noted that “Courts interpreting the Copyright Act, including the Supreme Court, have uniformly limited copyright protection to creations of human authors”, and cited various references where courts have referred to authors as humans or as “a man”, presumably in the sense of “mankind” rather than as a gender-specific reference.

While it is generally accepted in most jurisdictions that human creation is an essential ingredient of a claim to copyright protection, there are some exceptions. Earlier I referred to the United Kingdom. Under the Copyright, Designs and Patents Act, 1988, there is a provision (Section 178) defining computer-generated works as works “generated by computer in circumstances such that there is no human author of the work” (emphasis added). These works are currently provided with a more limited term of copyright protection (50 years from the date of creation), as opposed to the standard life plus 70 years provided to works made solely by human authors or by authors assisted by AI. (The rights-holder of a fully computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken”, even though that person played no creative role.) The UK Intellectual Property Office (UKIPO) is currently reviewing the Act’s provisions for protection of AI, with possible changes ranging from a requirement that human creativity be involved in order to claim copyright protection (in other words, dropping machine-generated copyright entirely), to reducing the protection for computer-generated works to as little as five years, to retaining the status quo.
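To make the difference in terms concrete, here is a minimal, purely illustrative sketch in Python of the two rules described above. It is a simplification: the actual provisions (sections 12(2) and 12(7) of the Act) run the term from the end of the relevant calendar year and contain qualifications not modelled here.

```python
from datetime import date
from typing import Optional

def uk_copyright_expiry(created: date, author_death: Optional[date]) -> date:
    """Simplified sketch of UK CDPA term rules.

    - Computer-generated work (no human author): copyright expires
      50 years from the end of the calendar year of creation (s. 12(7)).
    - Work with a human author: 70 years from the end of the calendar
      year of the author's death (s. 12(2)).
    The real provisions contain further qualifications not modelled here.
    """
    if author_death is None:  # no human author: computer-generated work
        return date(created.year + 50, 12, 31)
    return date(author_death.year + 70, 12, 31)

# A computer-generated work made in 2022 falls out of copyright at the
# end of 2072; a human-authored work outlasts its author by 70 years.
print(uk_copyright_expiry(date(2022, 3, 1), None))               # 2072-12-31
print(uk_copyright_expiry(date(2022, 3, 1), date(2080, 6, 5)))   # 2150-12-31
```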

In re-examining whether computer-generated works should benefit from copyright protection, the UKIPO consultation paper has set out three considerations: the need to (1) encourage innovation in AI technology and promote its use for the public good; (2) preserve the central role of intellectual property in promoting human creativity and innovation (emphasis added); and (3) base any changes on the best available economic evidence. Arguments put forward for dropping machine-generated copyright include concerns that protecting computer-generated works may promote these works at the expense of human creations and devalue human creativity. (A machine can work 24/7; humans need to sleep and eat.) Proponents of reducing the period of protection for computer-generated works argue that there is no need to incentivize a computer to create works, yet protection imposes costs on third parties. Finally, those who support copyright for works produced exclusively with AI maintain that it provides an additional economic incentive to invest in AI technology. The public consultation closed on January 7 of this year. No doubt we shall have to wait a while to see what proposals the Office brings forward.

As I wrote in an earlier blog (“AI, Ethics and Copyright”), one factor to consider with regard to the use of AI in creating works is ethical judgement, something only a human can provide:

“Despite enablement by machines and the creation of “new” works using AI through compiling, synthesizing, modifying and re-arranging, at the end of the day there will always be a human creator behind the AI who will judge whether and when the AI work is complete. This person will ultimately take a position on any ethical questions that may arise and the output will be subject to human judgement in terms of choices, editing, finalization, and release.”

I believe that humanity and copyright must go hand in hand. The US Copyright Office apparently agrees, although Stephen Thaler clearly does not. The Intellectual Property Office of the UK appears unsure.

What do you think?

© Hugh Stephens 2022. All Rights Reserved.

Requiring Digital Platforms to Pay for Using News Content: Canada Follows Australia’s Lead—But Not Precisely

Earlier this month I wrote about online safety legislation and noted how Canada is following Australia’s lead in developing a regime that would, among other things, create a Digital Safety Commissioner (called the eSafety Commissioner in Australia), as well as impose requirements on internet platforms to monitor for, and expeditiously take down, certain types of harmful content. Australia’s updated legislation went into effect in January of this year, while Canada is still in the public comment stage for its proposals. Another area where it seems that Canada intends to take a leaf out of Australia’s book is in requiring large digital platforms to reach revenue sharing agreements with news content providers. The two countries share much in common, including the heritage of a Westminster-style government, but just the same there are some significant national differences, and these will likely be reflected in somewhat different versions of legislation in both areas: online safety and news revenue sharing.

Regarding the latter, Canadian Heritage Minister Pablo Rodriguez has said that legislation will be introduced “as soon as possible”. It was included as one of the primary “to do” tasks in his mandate letter when he was appointed minister in December of last year. The operative paragraph read:

“Swiftly introduce legislation to require digital platforms that generate revenues from the publication of news content to share a portion of their revenues with Canadian news outlets to level the playing field between global platforms and Canadian outlets. This legislation should be modelled on the Australian approach and introduced in early 2022.”

Rodriguez has recently said that while the Canadian legislation will be modelled on the Australian version, it will differ in several ways. One change would be to delegate the power to decide which digital platforms will be required to negotiate with news content providers to an arm’s-length regulatory agency, rather than to a government minister. In Australia, although the News Media Bargaining Code was developed by the competition regulator, the Australian Competition and Consumer Commission (ACCC), and the ACCC administers the code, the decision as to which online entities will be subject to it rests with the Australian Treasurer, the equivalent of Canada’s Minister of Finance. Another agency, the Australian Communications and Media Authority (ACMA), is responsible for determining the eligibility of news media businesses to participate in the code. During the lead-up to passage of the Australian legislation, only two platforms (Google and Facebook) were named as potentially being subject to the Code, and both mounted strong campaigns to stop the legislation. See “Google’s Latest ‘Stoush’ with Australia” and “Facebook in Australia: READY, FIRE, AIM”.

In the end, both buckled and came to agreements with Australian publishers, large and small. As a result, the Treasurer did not proceed to designate any platforms under the legislation that brought in the Code, and thus, at the present time, the Code is not operative. However, it achieved its goal without having to be invoked. As the ACCC puts it, “While the Treasurer has not designated any digital platforms or services to date, the ACCC considers that existence of the code and the threat of designation is having the appropriate and hoped for impact”.

Rodriguez said that Canada may also require greater disclosure of deal terms, which in the Australian case are kept confidential. Some news outlets in Canada, especially the smaller ones, are in favour of market transparency while larger players generally are not. Already some outlets in Canada have reached agreements with Google to be part of Google’s News Showcase, but others are still holding out, hoping for a better offer. The legislation will apparently establish a regulator and impose arbitration if reasonable terms cannot be agreed between the designated platforms and news providers, a group that will also include electronic media (radio and TV). Rodriguez wants to avoid any impression that government will be involved in picking winners in this process.

News Media Canada, the umbrella group for the news publishing industry (self-proclaimed as “the voice of the print and digital media industry in Canada”), has been pushing hard for the legislation. Back in July of last year, it encouraged many of its members to publish a front-page “open letter” to Prime Minister Justin Trudeau calling for immediate action. That didn’t happen, as an election intervened, one in which Trudeau’s Liberal Party, which had hoped to convert its minority status into a majority government, instead came back into office with roughly the same seat count. It still forms a minority government and therefore has to rely on the support of at least one of the major opposition parties to pass legislation. In the case of the news media bill, the Liberals can likely count on support from both the mildly left-wing New Democrats, who will be happy to ensure that large digital platforms pay their “fair share”, and the Quebec-based Bloc Québécois, which will be happy to be seen to support francophone media.

While there are several major news publishers in Canada, such as Nordstar, which controls the Toronto Star; Postmedia, which publishes the National Post and many regional papers; the Globe and Mail; and Quebecor, which publishes French-language dailies like Le Journal de Montréal, none is as powerful politically as the major Australian papers, especially the ones owned by Rupert Murdoch. Newspapers in Canada just don’t have the same political clout as those Down Under. At the same time, the challenge facing the Canadian news industry in terms of revenues lost to digital platforms is as acute as elsewhere. According to Minister Rodriguez, the two dominant digital platforms (Google and Facebook) took 80 percent, roughly $8 billion, of the $10 billion in online advertising revenues in 2020.

Beyond commercial considerations, Rodriguez is also invoking, as justification for the new legislation, the need to maintain a professional, credible, non-partisan media as a means to combat “fake news”, a phenomenon that was on display during the truckers’ protest occupation of downtown Ottawa in February. Those protests saw several attacks on reporters as they were doing their job.

Whether Canada’s legislation, when introduced, will be as successful as Australia’s in pushing the platforms to come to the table with reasonable offers for their use of news content remains to be seen. Rodriguez is touting the Canadian legislation as being more transparent, with less political involvement, than that of Australia, perhaps a second-mover advantage. However, there is no question that the firm stance of the Australian government in standing up to Google and Facebook (now called Meta) has made the task of other governments, like Canada’s, considerably easier. The US is also studying the Australian model.

The platforms have smelled the coffee and are finding ways to strike deals with news providers in order to keep governments out of their hair. For that, Canada, and the people of Canada, owe the Australian government a debt of gratitude.

© Hugh Stephens 2022. All Rights Reserved.

Russia’s Invasion of Ukraine: It’s Also a Culture and Content War

Source: Wikimedia Commons

As you, dear reader, will know, this is a blog focused on copyright and content-related issues rather than politics, although at times there is an inevitable political spillover when dealing, for example, with the copyright dimensions of trade agreements, given that many such agreements are driven as much by political objectives as by economic considerations. Given current political circumstances, and the horrific images we see daily as a result of Russia’s appalling and brutal invasion of Ukraine, I have decided to dedicate this week’s blog post to the people of Ukraine and their brave and tragic defence of their homeland. Since so much has already been written about what is happening there, and why, I will try to look at the issue from a slightly different perspective: its cultural and content implications, for Ukraine, for Russia, and for the rest of us.

When it comes to IP protection, and respect for copyright, neither Russia nor Ukraine is exactly a paragon of virtue. In the most recent edition of its global IP index, the Global Innovation Policy Center (GIPC) of the US Chamber of Commerce ranks Russia and Ukraine as numbers 32 and 41, respectively, out of 55 countries. The index measures IP performance across a range of categories, including patents, copyrights, trademarks, design rights, trade secrets, commercialization of IP assets, enforcement, systemic efficiency, and ratification of major international treaties. In the specific area of copyright, there are seven indicators: copyright term of protection, legal measures providing exclusive rights to prevent infringement, injunctive-style relief, frameworks that promote cooperative action against online piracy, scope of limitations and exceptions, existence of technological protection measures, and policies requiring that software used by government be licensed. Out of a possible copyright score of 7, Russia clocks in at 2.74 and Ukraine at 1.83. In the category of licensed software, both get a big fat zero. This makes Russia’s threat to legalize software piracy (by creating a compulsory licensing mechanism for software, databases, and technology from companies based in countries that have imposed sanctions on Russia) somewhat less threatening, since software piracy there is already in excess of 90 percent anyway.
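To see how a score like “2.74 out of 7” arises: each of the seven indicators is scored on a scale of 0 to 1, and the category score is, in effect, the sum. The short sketch below (in Python) illustrates the arithmetic only; the individual indicator values are invented for illustration, chosen merely to reproduce Russia’s reported 2.74, and are not the actual GIPC numbers.

```python
# Hypothetical illustration of the GIPC copyright category arithmetic:
# seven indicators, each scored between 0 and 1, summed to a score out of 7.
# The values below are invented, chosen only to reproduce the 2.74 total.
copyright_indicators = {
    "term of protection": 0.60,
    "exclusive rights to prevent infringement": 0.55,
    "injunctive-style relief": 0.50,
    "cooperative action against online piracy": 0.34,
    "limitations and exceptions": 0.40,
    "technological protection measures": 0.35,
    "licensed government software": 0.00,  # the "big fat zero"
}
score = sum(copyright_indicators.values())
print(f"Copyright category score: {score:.2f} out of {len(copyright_indicators)}")
# -> Copyright category score: 2.74 out of 7
```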

The weak IP track records of both countries are perhaps not surprising, given the history of Ukraine as a country that has struggled to establish itself as an independent entity with a free market economy after emerging from the collapse of the Soviet Union. The record is perhaps less understandable in the case of Russia, but even here we have a country that has struggled with the legacy of a past dominated by central planning, where IP was a commodity that could be expropriated by the state or stolen for national purposes. War will not improve this situation.

Although western countries have imposed financial sanctions affecting banks and credit card companies, there have also been numerous voluntary actions taken by western companies in non-financial sectors. So far, among tech and software companies, Microsoft, Apple, Dell, Samsung, Adobe, Cisco, IBM and Intel have announced they will be stopping sales to Russia. Netflix has announced that it is pulling out; the company had refused to carry Russian news channels on its local-language service, despite a new law requiring it to do so, and has suspended all projects and content acquisitions in Russia. Sony, Warner Bros. and Disney have announced a pause on new film releases. Facebook has barred Russian state media from running ads, as has Google, which has also removed YouTube channels associated with state-controlled Russian media.

When it comes to using the media and content platforms to get the message out about what is really happening in Ukraine, Volodymyr Zelenskyy is running rings around Vladimir Putin. Zelenskyy has won the admiration of the world for his valiant personal stand and his ringing, heartfelt appeals for help. Dressed in fatigues, the daily strain showing clearly on his stubbled face, he could not present a starker contrast to the clinical, almost chilling demeanor of Putin, perched at the far end of a marble table 35 feet from whoever he is speaking to. Putin has blocked western media and social media and cracked down on any domestic coverage that does not fit his narrative of “liberation”. (It seems he is following the adage of “we had to destroy it to liberate it”.) In response, the news broadcaster RT (formerly Russia Today) has been removed from TV screens in Europe, the UK, North America and Australia. From both a tech and a content perspective, Russia has been put on the blacklist. Will this mean that Russian authorities will now roll back their recently strengthened enforcement of copyright laws, which, according to the 2022 GIPC report, has resulted in “a decrease in online infringement”? Probably.

Culture is another important element of the current crisis: specifically, whether Ukraine as a nation has a culture that is distinct from that of Russia. It is an inconvenient fact for Mr. Putin that Ukraine not only has a vigorous, distinct culture, but that Ukrainians are determined to defend and preserve it. Russia, of course, has its own rich cultural heritage (think Dostoevsky to Solzhenitsyn), and there is no doubt that Ukraine and Russia share a number of common Slavic cultural roots, but it is also a fact that Ukraine has its own rich literary and artistic history.

With the invasion of Ukraine, its museums have been scrambling to rescue invaluable cultural artifacts. Museums have been shelled, and some destroyed. As some have commented, the destruction of these artifacts amounts to a form of cultural cleansing. Jeremy Maron, curator of Holocaust and genocide content at the Canadian Museum for Human Rights, is quoted by the Globe and Mail as saying:

“Putin views Ukraine not as an independent nation, but as a part of Russia that was stolen from Russian control. In his perspective, it seems like there is no such thing as Ukrainian culture, so the cultural artifacts are fake evidence of the fake Ukrainian culture.”

UNESCO has expressed its deep concern over the damage to cultural sites. There are seven UNESCO World Heritage sites in Ukraine, including the 11th-century St. Sophia Cathedral, which, according to UNESCO, was instrumental in “the spread of Orthodox thought and the Orthodox faith in the Russian world from the 17th to the 19th century.” It is now under threat from Russian artillery shells and missiles. The Russians are shelling part of their own shared heritage.

Where this will end (at least at the time of writing) is frustratingly unclear. Ukrainians are both resisting and fleeing the carnage, especially women and children, leaving their men behind to fight. Russia’s ultimate end game is not yet clear, although the outlook for the people and culture of Ukraine does not look positive. Many will flee, some no doubt to Canada and the US, which already host the second- and third-largest Ukrainian diasporas globally, at 1.4 million and 1 million people, respectively. (Russia is home to the largest number.) Even if Russia installs some sort of puppet regime in Kyiv or attempts to reincorporate Ukraine into Russia, a Ukrainian identity and culture will survive and endure as it has for hundreds of years. And the message of what is really happening in Ukraine will reach the Russian people. Today, no one can block content forever.

Ultimately, Mr. Putin will fail. The Russian army cannot eliminate the soul of a people. At the moment, we are all Ukrainian.

© Hugh Stephens, 2022. All Rights Reserved.

Grappling with Online Safety Legislation: How to Hold the Platforms Accountable

When it comes to online safety—or its flip side, online harms—many countries are grappling with the problem. What is the role of government in establishing guidelines and regulations for the protection of citizens, particularly vulnerable segments of the population, from a range of harms perpetrated by anti-social and even criminal elements via the internet? What is the role of “internet intermediaries”, the internet distribution and social media platforms that the perpetrators use to attack their victims? Are the platforms simply innocent third parties whose services have been hijacked, like phone lines being used by fraudsters to scam victims, or do they play a more active role? Should they be expected to be as much a part of the solution as part of the problem? And if they are to be expected to play a role in controlling harmful content, what is that role and how should they be encouraged (or compelled) to go about it? And, for that matter, what exactly is “harmful content”?

These are among the many questions being examined in various jurisdictions: in the UK through its draft online safety legislation; in the US, where the focus has been on amending the notorious Section 230, which has been interpreted by the courts to shield the platforms from responsibility for harmful content they distribute; and in Canada. Australia has already enacted legislation. All are approaching a similar problem in different ways, although there are some common themes. Let’s look at the Australian and Canadian approaches and see what lessons they may hold.

The Australian legislation, known as the Online Safety Act, was passed in mid-2021 and came into effect in January of this year. Australia was the first country to establish an eSafety Commissioner, in 2015. Initially the remit of this office was limited to online safety for children, but in 2017 its mandate was expanded to cover all Australian citizens. With the implementation of the Online Safety Act, the Commissioner’s role expands further. The Commissioner has an education, research and investigatory role, as well as a limited quasi-enforcement role (i.e. the office can issue blocking requests and blocking orders). The new Act updates and consolidates earlier legislation dealing with cyberbullying of children and, in the words of the Australian government:

“…retains and replicates certain provisions in the Enhancing Online Safety Act 2015, including the non-consensual sharing of intimate images scheme; specifies basic online safety expectations; establishes an online content scheme for the removal of certain material; creates a complaints-based removal notice scheme for cyber-abuse being perpetrated against an Australian adult; broadens the cyber-bullying scheme to capture harms occurring on services other than social media; reduces the timeframe for service providers to respond to a removal notice from the eSafety Commissioner; brings providers of app distribution services and internet search engine services into the remit of the new online content scheme; and establishes a power for the eSafety Commissioner to request or require internet service providers to disable access to material depicting, promoting, inciting or instructing in abhorrent violent conduct for time-limited periods in crisis situations.”

That is a lot of power, and it would seem similar to what the Canadian government is trying to achieve with its “online harms legislation”. However, even though the proposed online harms definitions in Canada are much more tightly constrained than under the Australian legislation or the draft Online Safety Bill being examined in the UK, there has been a lot of push-back from various groups. In 2021 the Department of Canadian Heritage issued a discussion paper laying out a proposed approach to address harmful online content, inviting public comment. And feedback they got.

As in Australia, the proposed legislation would establish an office to oversee it, the “Digital Safety Commissioner”. In Canada, the Commissioner’s responsibilities would focus primarily on administration and enforcement, including the power to receive complaints, conduct compliance inspections, issue public reports and compliance orders, recommend administrative penalties of up to $10 million, refer offences for non-compliance to prosecutors (with fines reaching up to $25 million), and, in exceptional circumstances, apply to the Federal Court for an order requiring telecommunications service providers to implement blocking or filtering in cases involving child sexual exploitation and/or terrorist content. This targeted use of site-blocking, which I wrote about earlier (“Site-blocking for ‘Online Harms’ is Coming to Canada”), would be used as a last resort and applied to offshore sites beyond the reach of Canadian courts. The Commissioner’s office would be backstopped by a Digital Recourse Council (the title is self-explanatory), and both the Commissioner and the Recourse Council would have the benefit of an industry Advisory Board. The legislation would be restricted to the following five categories of “harms”, all of which fall under the Criminal Code: (1) terrorist content; (2) content that incites violence; (3) hate speech; (4) non-consensual sharing of intimate images; and (5) child sexual exploitation content.

Not covered is a range of harmful but possibly legal activities and content, such as cyberbullying, defamation, online harassment, disinformation, false advertising and so on, the so-called “awful but lawful” content. Notably, the proposed UK legislation would cover such content, at least insofar as dominant platforms are concerned.

Despite its relatively narrow focus, and the fact that it would establish an independent regulator subject to a Recourse Council, the proposed Canadian legislation has been heavily criticized by civil liberties and “internet freedom” groups. Some have objected to the requirement for platforms to inform law enforcement of illegal activities, claiming this would amount to unauthorized surveillance, although the criminal reporting requirement would be limited to cases involving “serious imminent harm” or threats to national security. I find it hard to imagine that any responsible business would deliberately turn a blind eye to such information, and the legislation would give platforms needed legal cover to act. Indeed, it would require them to do so.

Other critics complained that the requirement for platforms to monitor for harmful content on their services, and take it down within 24 hours of it being flagged, would result in censorship, especially given that automated screening mechanisms would likely be used, possibly resulting in “false positives” and overzealous screening. With regard to the proposed remedy of site-blocking for offshore sites distributing material that is sexually exploitive of children or that promotes terrorism, critics complained this would violate net neutrality. This is the usual canard trotted out whenever there is any suggestion that some reasonable controls, even ones limited to illegal content and subject to court review, should be placed on content on the internet. But net neutrality has nothing to do with permitting illegal content to remain online in defiance of court orders.

For some critics, any regulation of the internet is too much. Nonetheless, as with any government regulatory intervention, it is important to strike the right balance between necessary oversight and restraint, applying the lightest regulatory touch needed to preserve individual freedoms. The consultation process allowed for that input, and the results were published in February in a government document titled “What We Heard”. Somewhat unusually, submissions were not made public, apparently out of concern that some groups would not want their experience with harmful content shared publicly, although other groups released their submissions for public consumption. Instead, the government summarized input from the 422 unique submissions using opaque language such as “some stakeholders”, “multiple respondents”, “a few respondents”, “certain respondents”, “a select few”, “a majority of respondents”, etc. We are left to guess at who said what. It is hard to summarize the input on the multiple elements of the proposal, but one phrase perhaps best encapsulates the feedback received: “Regarding the substance of the proposal, although multiple individuals and organizations welcomed the Government’s initiative, only a small number of submissions from those stakeholders were supportive, or mostly supportive, of the framework as a whole.”

This suggests that there will be a return to the drawing board, and perhaps more consultations, but it is clear that the government intends to proceed with measures to help ensure online safety. For one thing, it is in the mandate letter of the responsible minister. This is buttressed by the lines in “What We Heard” noting that:

 “Almost all respondents commented that Government regulation has a role to play in addressing online harms and in ensuring the safety of Canadians online.

A digital safety regulator will be part of the solution.

The introduction of the regulatory bodies was broadly supported, as many respondents thought the new regulators seemed fit for purpose.”

Reading through the comments on the various elements of the proposal, it is evident that views were all over the map, no doubt reflecting the backgrounds of the various intervenors: individuals (350), civil society (39), industry (19) and academics (13). Then there were roughly 9,000 “click and submit” interventions organized by cyberlibertarian lobby groups like OpenMedia. Many of the submissions contradicted each other. And of course, even counting the OpenMedia claque, this is hardly a representative sample of what 36 million Canadians think, and the government knows this. While I can’t prove it, I am willing to bet that most Canadians are fed up with toxic, dangerous, and predatory content on the internet and want the platforms to take some responsibility for what they allow to be distributed. They also think there is a role for government to play in ensuring that the platforms, which until now have managed to duck accountability for content that they permit, sometimes promote, and often indirectly profit from, are held to account. (The UK is proposing an interesting accountability feature: a “designated executive” who would be held personally to account under a new criminal offence of failing to deal with “repeated and systemic failings that result in a significant risk of serious harm to users.”)

Canadians are not the only ones concerned. Online safety is a hot-button issue in many democracies. While Australia is leading the way, the UK and Canada are moving ahead at their own pace, and the US is (slowly) addressing Section 230 and kids’ online safety. The children’s safety issue even rated a mention in the U.S. State of the Union address delivered by President Biden on March 1. It is important to get the balance right, but equally important not to let the platforms off the hook. With great power (and great profits) comes great responsibility.

© Hugh Stephens 2022. All Rights Reserved.
