Those Problematic Comments on Facebook—Who Bears Responsibility for Them?

Source: Shutterstock

Unless you have been living in a cave, you will be well aware of Facebook’s current travails, fed by whistle-blower Frances Haugen’s explosive testimony about how Facebook researched but ignored findings suggesting that the company’s algorithms were harming individual users by promoting content that kept them engaged, but at a cost to their mental wellbeing. In other cases, Facebook promoted user “engagement” over such basic considerations as factual accuracy, impact on community health (COVID misinformation), public safety (the January 6 attack on the US Capitol) and avoidance of sexual exploitation. Facebook’s use of self-reinforcing algorithms to maintain a closed loop of content and to create “echo chambers” for users, especially users addicted to fringe conspiracy theories and other non-mainstream views, is one problem, as I wrote about recently. Its spotty record in content moderation is another. Part of the blame for this lies with a pernicious and widely abused piece of legislation, Section 230 of the 1996 Communications Decency Act. While this legislation may have been well intentioned at the outset, over the years it has been interpreted by US courts as providing internet platforms with blanket immunity from liability for any user-generated content they distribute, including user comments on posts. And digital platforms like Facebook have acted accordingly, doing as little content moderation as they can get away with.

While Facebook is not liable for whatever content is posted in user comments, in an interesting wrinkle coming out of a defamation case in Australia, parties who posted content to Facebook are having to defend themselves against liability for comments made by others about that content. So far this has been limited to news publishers who used Facebook to reach a wider audience for their content. Traditionally, publishers have been expected to moderate content that they publish. Among other things, they are expected to avoid publishing libellous comments from users (such as in “Letters to the Editor” columns) by exercising editorial oversight. Likewise, news outlets moderate the content of their own websites, including comments posted by readers. But what about comments posted by readers to Facebook? They could be seen as similar to a “letters” feature in a newspaper but carried (distributed) by Facebook rather than by the publishers themselves, either in their paper or on their own websites. But Facebook is off the hook.

The defamation case in question involves a suit by an Australian citizen, Dylan Voller, against two Australian media outlets, The Australian and Sky News, for comments made on their Facebook pages by other Facebook users. The Australian court held that the media outlets were liable for the comments left by readers in response to content that the outlets had posted to Facebook. Whether the comments themselves were defamatory has not yet been decided, as the ruling focused on who was liable for the comments. The Rupert Murdoch-owned outlets had offered the defence that they were neither able to moderate the comments on Facebook’s platform nor switch them off, because these functions were controlled by the platform. At the time, Facebook did not allow comments to be switched off in Australia. Why? The comment feature encourages user engagement, and this helps build Facebook’s bottom line. After the initial Australian court case in 2019, Facebook finally agreed to allow news outlets to delete comments but insisted that they could only be disabled one post at a time. Eventually, earlier this year, Facebook changed its policy and announced that all users will have the right to control who can comment on posts.

In Canada, the CBC has now decided to shut down all Facebook comments on its news posts. Initially, it did so for just a month, citing the “vitriol and harassment” that its journalists face on social media, but it has now made that decision permanent. The CBC notes that:

“We know certain stories will draw out obnoxious and hateful comments. The truth is we spend a considerable amount of attention and resources attempting to moderate our Facebook posts. It takes a mental toll on our staff, who must wade into the muck in an effort to keep the conversation healthy for others. It is not sustainable.”

In explaining its decision to make the disabling of comments ongoing, it continued:

“we were seeing an inordinate amount of hate, abuse, misogyny and threats in the comments under our stories. Our story subjects were attacked. Other commenters were attacked. Our journalists were attacked. Misinformation and disinformation were rife.

As a result of this toxicity, we were posting fewer of our stories and videos to Facebook, knowing our story subjects would face certain abuse and particular communities would be targeted. When we did share stories to Facebook, we spent a huge amount of time and effort cleaning up the sludge.”

This is distressing but it is the reality. There seems to be something about the internet that encourages toxicity in public discourse amongst a small but vocal minority, who seem to have nothing better to do than resort to hateful, racist, misogynistic comments and personal attacks, something that would never be tolerated in the offline world. People engaging in that kind of behaviour would either not have a platform to propagate their bile or they would be shut down, pronto. Is it the anonymity of the internet, or the knowledge that small, marginal voices will be amplified and given greater credibility by the nature of the social media platforms they inhabit that encourages this behaviour?

One cannot blame Facebook for the dark side of human nature, but one can reasonably expect it to step up and own what it has created, and address the problem. Just as news publishers in the past kept the crazies out of the limelight, should we not expect the world’s largest social media platform to exercise greater editorial oversight over what it distributes? Note that in the case of the CBC, and the media outlets in Australia, the publisher of the material that was the target of negative comments on Facebook had to take on the task of moderating the content. In the US at least, Facebook can hide behind Section 230 immunity from civil liability.

In an article examining the “Unintended Economic Implications of Limited Liability Regimes for Online Platforms”, German commentator Stefan Herwig notes that Facebook does not have to factor in “economic externalities” because it has been able to offload to others the costs of its algorithmic amplification and content moderation policies. These include “journalistic research corrective agencies” for dealing with disinformation, or police investigative agencies with regard to hate speech or terrorist propaganda. However, the principle should be the same as with environmental contamination: the polluter should pay. Facebook does, of course, undertake some content moderation, covering content that violates its terms of service and content that potentially violates criminal law, an area where no platform immunity exists. But it does so at minimum cost, automating where it can and outsourcing to the lowest-cost (often offshore) provider where it cannot. The result is that automated systems often get it wrong, either blocking content that should not be blocked, as in this recent case highlighted in the Canadian media, or not blocking content that should be blocked. One could argue that Facebook is damned if it does and damned if it doesn’t, but it is hard to feel sympathy for a trillion-dollar company that dominates social media and has made scale its prime asset. If scale brings with it additional costs in terms of hiring real people to perform essential tasks to keep content within socially and legally acceptable bounds, that is part of the price that has to be paid.

While deeper user engagement is good for business, Facebook may find that it has embarked on an increasingly risky path. If reputable organizations are becoming increasingly reluctant to post content to the platform because of the proliferation of irresponsible, vindictive and defamatory comments, this is eventually going to hurt the company’s bottom line. One way to “encourage” Facebook (and other platforms) to take a more active role in moderation would be to modify the blanket Section 230 civil immunity that is provided, requiring platforms to accept more responsibility in cases where users or society are being subjected to real harm from the dissemination of damaging user-generated content.

Supporters of Section 230, like Prof. Eric Goldman of the Santa Clara University School of Law, claim that eliminating platform liability immunity will curtail “niche non-majoritarian voices”. Obviously not all non-majoritarian voices are down in the gutter, but some are, and giving them a platform to spread their poison serves no useful purpose. (See “Thank You Professor! Explaining Section 230 to Canadians”). Short of fixing Section 230 in the US, and enacting measures in other countries to hold the platforms responsible for content they distribute and profit from, the only viable solution seems to be to switch off user-generated comments because the bad outweighs the good, especially if or when the burden of legal responsibility is placed on the party posting the content rather than on the platform or on the party making the defamatory comments.

There seems to be something wrong with this picture with respect to where the burden lies.

© Hugh Stephens, 2021. All Rights Reserved.

Why is New Zealand’s National Library Declaring War on Authors?

Source: Wikimedia Commons

At first blush, one would think that a natural symbiotic link would exist between authors, publishers, librarians and readers. After all, they are all part of what I would call the literature ecosystem, the chain of content that leads from the creation of a work through its dissemination to its consumption, whether for entertainment or learning. Librarians love books, so why wouldn’t they love the creators of books? But it’s not all that unusual to find authors and publishers on one side of an issue and the librarians on the other, as is currently happening in New Zealand. In this case, it involves an ill-conceived plan announced by the National Library of New Zealand to donate some 600,000 works it no longer wants to the Internet Archive (IA) in San Francisco for digitization as part of the Archive’s “Open Library”.

Another author-publisher/librarian split famously happened last year in the US when the Internet Archive decided to unilaterally launch its so-called “National Emergency Library” (NEL), in which it removed all lending restrictions on the books it had digitized and began to freely loan out unlimited digital copies on the grounds that many libraries were closed or had restricted access because of COVID. This move was applauded by the American Library Association (ALA). On March 24, 2020, the same day the National Emergency Library was announced, the ALA tweeted:

“@InternetArchive has announced the creation of a new National Emergency Library with over 1.4 million books available to borrow. With libraries across the country closed, we appreciate IA filling this need.”

The ALA’s support was initially echoed by others such as National Public Radio which, on March 26, published a laudatory piece calling the National Emergency Library a “compelling alternative” (to closed public libraries). It wasn’t long before the media pendulum began to swing the other way, however, as authors’ associations such as the Authors Guild and the Association of American Publishers issued critical statements, accusing the Internet Archive of copyright violation and an “unlawful, and opportunistic attack on the rights of authors and publishers”. Writing in Medium, Adam Holland of Harvard’s Berkman Klein Center for Internet & Society published a good summary of the ebb and flow of the debate over the NEL. The Internet Archive ended its NEL experiment in June 2020, shortly after several major publishers filed a lawsuit against the IA. As I commented at the time, it looked as if the Archive was using the pandemic as an excuse to challenge some of the basic precepts of copyright. Even though the IA ended its Emergency Library, the lawsuit continues, as the publishers are challenging the underlying principle on which the Internet Archive lends digitized works that are still under copyright. The theory the IA uses to justify the way it operates its Open Library is known as “Controlled Digital Lending”.

While the National Emergency Library amounted to Controlled Digital Lending (CDL) on steroids, CDL itself is controversial and is at the root of the publishers’ ongoing lawsuit. Critics of CDL say that scanning a book to produce a digital version is a form of copying, while supporters claim it is a fair use since it is simply a change of format. However, the problem with substituting a digitally scanned version of a book for the original hard-copy version is that this undermines copyright and destroys the market for e-books. As blogger Neil Turkewitz has pointed out in his piece, “The Internet Archive’s Misguided Effort to Liberate Books”, no one is going to buy an e-book for their Kindle after they have read a “free” digitized version. Licensing e-books is part of the publishing ecosystem that relies on the distribution rights contained within copyright. Libraries are frequent purchasers of e-book licenses, allowing them to lend digital copies to their members. The Internet Archive’s Controlled Digital Lending theory upends this publishing ecosystem, depriving authors of the opportunity to fully exploit their rights to their work. Turkewitz sums it up as follows:

“Publishers have established independent and distinct distribution models for ebooks, including a market for lending ebooks through libraries, which are governed by different terms and expectations than print books. IA’s end-run around these differences and restrictions is aggressive and unlawful.”

The best that one can say about Controlled Digital Lending and the Internet Archive’s Open Library is that it is controversial. (The worst might be that it is engaged in book piracy.) That being the case, one might want to tread carefully in engaging with the Internet Archive, especially if one were a national library operating under government auspices, such as the National Library of New Zealand (NLNZ). The Library’s mission statement is to “collect, connect, and co-create knowledge to power New Zealand.” One of the ways it intends to fulfill its mandate is by donating 600,000 “surplus” works, including many under copyright, to the Internet Archive for digitization. This has caused a furor in New Zealand.

It all goes back to last year, when the National Library announced it had a problem. It had too many books, many of which had limited readership, and needed to slim its holdings. It wanted to cull 625,000 books from its overseas published collections in order to free up space to accommodate indigenous New Zealand holdings. It proposed to “re-home” some of these books by donating them to regional libraries; others would be sold at book fairs. Many New Zealanders were not impressed with the way the Library handled the issue, announcing it as an “operational matter” within its purview to decide. Accusations of “cultural vandalism” were launched.

Unwilling to back down, the Library announced in July of this year that it had struck a “historic agreement” with the Internet Archive to donate all the books left at the end of the overseas published collections review process to the Archive for digitization and preservation. If the Library’s plan to offload part of its collection of foreign books to book sales and local libraries aroused concern, the plan to donate the works to the Internet Archive generated outright anger in a number of circles, particularly among New Zealand writers and publishers. The concern was not so much that works of New Zealand authors would be sent by the Library to the Internet Archive but rather that the Archive engages in “piracy on a massive scale”. The New Zealand Society of Authors and the Publishers Association of New Zealand declared that “leading authors from New Zealand…have had their books illegally distributed online for free by the Internet Archive, forcing publishers and authors to repeatedly spend time and money taking enforcement action.”

The Publishers Association asked, “How can the National Library stand alongside internet pirates and not New Zealand’s own literary community?” The joint statement ended with:

“Authors and publishers will be reviewing all their current relationships with (the) National Library in light of this total disregard for New Zealand books and creativity.”

Fighting words.

But did the librarians at the National Library reconsider their course of action? They did not. Instead, they denied that there were any copyright implications arising from the Internet Archive donation and stated that any authors who wanted their works delisted from the donation could opt out (leading to the destruction of the work). The onus for opting out is placed on the author, and the process is not trivial:

“You will need to provide proof of rights and the unique number for the title you have identified. The unique number is on the spreadsheet. We cannot process requests that do not include the unique number allocated to a title. Follow the steps below to opt out:

1. Check the list below to see if you hold rights to any titles being donated.

2. Search the spreadsheet either by author name or publication title. (Pressing the keys “ctrl” and “F” on your computer keyboard will enable the search function.)

3. Email us with the titles you would like removed. Your email must include proof of rights and the unique numbers of the titles you would like withdrawn.”

British and Australian authors have been warned that if they don’t want their works ending up at the Internet Archive to be digitized, they have to act fast. The Australian Society of Authors and the Australian Publishers Association requested that the National Library of New Zealand proactively seek permission from any rights-holders whose books will be donated to the Internet Archive for digitization, but the Library has refused, leaving the onus on authors and publishers to opt out. So much for the symbiotic relationship between authors and libraries.

Actually, it is not the donation of a few copyrighted works to the Internet Archive that is the principal concern of contemporary authors and publishers; it is that a reputable national organization like the National Library of New Zealand would put its thumb on the scale of a high-profile copyright dispute between the Internet Archive and international publishers by, in effect, siding with the Archive through its donation. They are concerned about the reputational damage done to a New Zealand institution by throwing in its lot with the IA, plus the disrespect shown to New Zealand writers by having their National Library choose to work with an organization that has repeatedly damaged their economic interests and infringed their copyrights.

The National Library has chosen to thumb its nose at New Zealand creators and seems intent on forging ahead with its disposal plans. I mean, why not offload your responsibilities to a controversial US organization that will take the books off your hands for nothing and digitize them at no cost to the Library? If a few authors and publishers get hurt in the process, well, so what? It’s all pretty sad, and I would have expected better of an organization like the New Zealand National Library, and also of the New Zealand Government that has authority over the Library.

If the Library truly believes its own rhetoric that its mission is to “co-create knowledge to power New Zealand”, it seems to me that it could start by showing some respect for the writers and publishers who produce the works it is its mandate to collect.

© Hugh Stephens, 2021. All Rights Reserved.

“It’s Not Really Our Fault: It’s the Algorithm”

Source: Shutterstock

This seems to be the prevailing view these days amongst the large digital social media and search platforms when the algorithmic selections they have programmed turn out to yield undesirable results. The poster child for this is Facebook, as revelations from whistle-blower Frances Haugen show how the company’s policies, and its algorithms, prioritize viewer engagement over safety and responsibility. Strident, opinionated, racist voices get people worked up. They generate traffic. Never mind that they are socially damaging, defamatory, incite violence, and are disconnected from factual reality. Never mind that some users of the service spread inflammatory misinformation about COVID vaccines or pursue other conspiracy theories. The algorithm boosts them and promotes more engagement. Never mind that many teenaged girls, addicted to Instagram, are worried or depressed about their body image. The algorithm that governs what they see makes the situation worse. As reported by the Guardian, “an algorithm tailors the photos and videos that a user sees, potentially creating a spiral of harmful content.” But it’s not the platform’s problem. It’s the algorithm, you see.

What exactly is an algorithm? We all have a general idea, I am sure. The Oxford Dictionary definition is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.

Algorithms are essential for the operation of many things, especially in the internet environment. For example, algorithms screen content for copyright violations and inappropriate sexual images. (They also get it wrong a certain proportion of the time, such as blocking Rubens’ nudes as well as contemporary explicit images). Algorithms have also been embraced by social media platforms, not only to block or filter but also to promote or recommend. When we talk about algorithms selecting what people will see, or be served, we are talking about a mechanism for sorting content based on relevancy to a user: how likely is it that they will want to see it? How does the algorithm do this? It works by “sniffing out” your preferences, based on likes, frequency of interaction, categorization of the type of content you have already seen, or seem to prefer, the content that you share with others, and so on. While this can be useful, it can also drive users into an echo chamber where all they hear are voices that sound like their own. This is where the conspiracy theorists and others come in. Social media algorithms are driven by machine intelligence and machine learning, so the more you follow a predictable pattern, the more the algorithm will reinforce that behaviour.
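The sorting mechanics described above can be sketched in a few lines of toy code. The weights, field names and numbers here are hypothetical illustrations; real platform ranking algorithms are proprietary and vastly more complex:

```python
# Toy sketch of engagement-based ranking (hypothetical weights and
# field names; real recommendation systems are proprietary and far
# more complex).

def relevance_score(post, user):
    """Score a post for a user from simple engagement signals."""
    score = 0.0
    # Reward overlap with topics the user already likes: the echo-chamber driver.
    score += 3.0 * len(post["topics"] & user["liked_topics"])
    # Reward raw engagement, with no regard for accuracy.
    score += 1.0 * post["likes"] + 2.0 * post["shares"]
    return score

def rank_feed(posts, user):
    """Sort posts so the 'most relevant' (most engaging) appear first."""
    return sorted(posts, key=lambda p: relevance_score(p, user), reverse=True)

user = {"liked_topics": {"conspiracy", "politics"}}
posts = [
    {"id": "news",   "topics": {"health"},     "likes": 50, "shares": 5},
    {"id": "fringe", "topics": {"conspiracy"}, "likes": 40, "shares": 30},
]
print([p["id"] for p in rank_feed(posts, user)])  # → ['fringe', 'news']
```

Nothing in such a score rewards accuracy: the heavily shared fringe post that matches the user’s existing interests outranks the straight news item, and each ranking generates more of the very signals that feed the next one.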

Algorithms are normally proprietary, protected technology and intellectual property, with each platform jealously guarding the “secret sauce” that keeps consumers coming back for more. Given that they are created by a real person, even though they develop and grow under the impetus of machine learning, a chain of accountability can be established. The same principle applies elsewhere in intellectual property, for example to a copyrightable work created by Artificial Intelligence (AI). AI can create musical, literary or artistic works, and these can be protected by copyright. But, as I wrote in a blog last year (AI, Ethics and Copyright),

“If only humans are capable of creating works that are subject to copyright protection (as the US Copyright Office has made clear in a recent interpretation), then how does one deal with works produced by a seemingly autonomous machine?”

This is not an easy question, but the answer increasingly seems to be that the copyright is conferred on the human being who created or directed the program, although it remains under study. Even in the case of machine and AI-generated works, there is always a human hand behind the creation. It may take some ingenuity at times to determine which human hand is predominant, but that is why we have courts. So, if AI can be traced back to a human creator (whoever designed the software program and applied it to the creation of the work), then so too the creators of algorithms must claim ownership, which also means bearing responsibility for what algorithms do.

Taking responsibility is the big issue of the day. Facebook maintains that it keeps its algorithms under constant review and refines them to eliminate undesired outcomes when necessary. But Haugen’s whistle-blower testimony suggests that the company is slow to act when problems surface and prefers to give priority to business outcomes (longer user engagement, which drives more ad revenue) over social responsibility concerns. The criticism of Facebook covers how the company and its algorithms deal with everything from climate change deniers, political conspiracy theories, and sex trafficking to hate speech, young adults, teenage girls and health misinformation. An excellent summary can be found here (thanks to The Markup).

Inevitably this draws political scrutiny, in the US and elsewhere. Legislation can be tweaked to incentivize platforms to accept greater responsibility for their algorithms. For example, the US House of Representatives currently has two bills before it that deal with this issue. The first (labelled the “Justice Against Malicious Algorithms” Act) would remove the “safe harbour” protection against civil liability that platforms enjoy under Section 230 of the 1996 Communications Decency Act, making internet platforms liable when they “knowingly or recklessly” use algorithms to recommend content that leads to physical or severe emotional harm. Section 230 is a controversial provision that I have written about several times before (most recently, here). It allows digital platforms to evade civil legal responsibility for defamatory, misleading, obscene, racist or otherwise illegal or objectionable content posted by users, on the questionable basis that the platforms are not “publishers” and therefore have no control over or responsibility for user-posted material distributed on their platforms. In other words, if this bill becomes law, there will be no platform immunity when the algorithm results in severe harm to the user.

A second bill, the “Protecting Americans Against Dangerous Algorithms” Act, would take this a step further by eliminating Section 230 protection in cases where

“the interactive computer service used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information (including any text, image, audio, or video post, page, group, account, or affiliation) provided to a user of the service”,

except where

“the information delivery or display is ranked, ordered, promoted, recommended, amplified, or similarly altered in a way that is obvious, understandable, and transparent to a reasonable user based only on the delivery or display of the information (without the need to reference the terms of service or any other agreement)…”

In plain language this means that where a user is manipulated by an algorithm, or when an algorithm starts making content decisions unknown to the user, Section 230 liability protection no longer applies. This parallels the recommendation made by Facebook whistle-blower Frances Haugen with respect to Facebook’s intentional algorithm-driven ranking decisions. However, who would decide whether a user is being manipulated is an unanswered question.

Another piece of legislation introduced earlier this summer in the Senate would remove liability protection from technology companies if their platforms spread health misinformation during a health crisis. This could occur when algorithms suggest and boost misinformation that seems to conform to what a user prefers, or that received “likes”, or was shared by a number of users. Small but organized sections of the population can boost particular sources of information, even if inaccurate or misleading, by creating lots of traffic, a tactic to which an algorithm will respond unless very carefully programmed.
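The boosting tactic can be illustrated with a toy popularity counter (the numbers and names are invented for illustration; actual platform signals are proprietary): a naive count of interactions lets a handful of organized accounts outweigh a much larger genuine audience, while simply capping each account’s contribution per post blunts the effect.

```python
from collections import Counter

# Hypothetical engagement log of (account, post) pairs. Five coordinated
# accounts interact 20 times each with "misinfo"; thirty distinct
# accounts interact once each with "accurate".
events = [(f"brigade{i % 5}", "misinfo") for i in range(100)]
events += [(f"reader{i}", "accurate") for i in range(30)]

# Naive score: every interaction counts, so the organized minority wins.
naive = Counter(post for _, post in events)

# Capped score: each account counts at most once per post.
capped = Counter(post for _, post in set(events))

print(naive.most_common())   # [('misinfo', 100), ('accurate', 30)]
print(capped.most_common())  # [('accurate', 30), ('misinfo', 5)]
```

The design point is the one the paragraph makes: an algorithm that counts raw traffic responds to brigading unless it is very carefully programmed, for instance by weighting distinct accounts rather than total interactions.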

Poorly designed algorithms can create their own distortions, as was shown by the case of “Carol’s Journey”. As documented by journalist Brandy Zadrozny, “Carol” was a fictitious user created by Facebook researchers back in the summer of 2019 to study the role of the platform’s algorithms in promoting misinformation to users. Carol’s profile indicated that she was “a politically conservative mother from Wilmington, North Carolina,…(with) an interest in politics, parenting and Christianity (who) followed a few of her favorite brands, including Fox News and then-President Donald Trump.”

How did Facebook’s recommendation algorithm treat this information? Within two days, Facebook was recommending that she join groups associated with the conspiracy group QAnon. Within a week, her feed was replete with hate speech and disinformation, content that violated Facebook’s own rules. What did Facebook do with this information? Not much, according to Zadrozny’s report, although Facebook claimed that the research demonstrated how seriously it takes the issue of misinformation.

A problem with algorithms is that they can be manipulated by knowledgeable users to boost some content over others. It is common to see “search optimization” services offered to businesses to boost their rankings on search engines through various techniques designed to appeal to, and be picked up by, the algorithm. Given that the software that sorts and ranks content can be tweaked by users, it follows that those who create and manage the algorithms must also be prepared to defend the algorithms they use. They need to take responsibility when their algorithms are either hijacked by users or, worse, misused by the platform itself to serve content to users that may be harmful to their well-being but which, like addicted gamblers, they continue to seek out.

Algorithms are here to stay. Like many things, they can be used for good or ill. In the hands of influential social media platforms that recommend content they are a powerful tool. Self-regulation does not appear to be working, so it is not surprising that various legal and legislative remedies are being proposed. Frankly, it is unlikely that any of the current bills in the US Congress will become law at the end of the day. They will likely meet the fate of much legislation that is introduced, debated, but not passed. Nonetheless, the fact that lawmakers are now seized of these issues is important and will hopefully bring more pressure to bear on the platforms to accept greater responsibility for their actions and the tools that they use to engage users. Hiding behind the algorithm won’t work. They need to own the problem and fix it. If stripping away some of their immunities brings about reform and greater accountability, I am all in favour.

© Hugh Stephens 2021. All Rights Reserved

Poppy Time Again: Watch Out for Infringers!

Credit: Royal Canadian Legion

For the past couple of years at this time, as Remembrance Day (November 11) approaches, I have posted a blog on the copyright and trademark implications of the commemorative red poppies that become so ubiquitous on people’s lapels at this time of year, at least in certain countries. In both Britain and Canada, the main veterans’ organizations (Royal British Legion and Royal Canadian Legion) hold the trademark on poppies when used for commemorative purposes (and associated with the phrase “Lest We Forget”). They use the sale of poppy-themed products, and replications of the poppies themselves, as a fund-raising vehicle, with the funds earmarked for veterans’ welfare. They have also copyrighted a number of designs incorporating the poppies that are used on various products.

This year marks the centennial of the use of the poppy both as a mark of remembrance for those who gave their lives in the First World War and as a vehicle to raise funds for veterans' needs. In Canada, the poppy was adopted by the Great War Veterans Association (which later became the Legion) in July 1921, at the urging of Anna Guérin. After getting the poppy officially adopted in Canada, Mme. Guérin sailed for Britain to meet with the British Legion, which officially adopted it in September of that year.

Despite the longstanding use of the red poppy by veterans’ organizations, others do not seem to be dissuaded by the Legion’s ownership of the poppy-related IP (intellectual property), nor are they influenced by the fact that the proceeds of sales go to benefit veterans. The internet is rife with all kinds of Remembrance Day poppy-themed products that have no relation to the Legion, with Amazon, Facebook, Shopify and Redbubble being among the main platforms that promote the infringing products. Search “Remembrance Day Products” on Amazon.ca and the first thing that pops up is an ad for a brooch featuring red poppies with the banner “Lest We Forget” superimposed across it, offered by an outfit called “Bling Stars”. Does Amazon care about the Legion’s IP or its mandate? Clearly not.

The problem has existed for several years, and while the Legion tries to go after transgressors, it does not really have the resources to do so. When it does, it goes after suppliers, not users, so you can be assured that if you show up to a cenotaph on November 11 wearing a non-Legion poppy facsimile, you won’t be slapped with a lawsuit. Still, it is discouraging that some suppliers take advantage of the commercial opportunity to flog poppy-related products without any regard for the Legion or the veterans that benefit from the sales of the products.

Interestingly, some of these "unauthorized" designs could well qualify for copyright protection if they are original (e.g. posters, or designs used on clothing such as scarves), yet they would at the same time infringe the Legion's trademark by incorporating the poppy in the design. The fact that a work can be copyrighted while simultaneously infringing another party's trademark is not far-fetched. This can occur in photography, when a photograph that itself enjoys copyright protection infringes another party's copyright (such as a photo of a copyrighted painting) or trademark (a photo of a trademarked logo), especially if the photograph is used commercially.

When it comes to infringement, some people don't give a damn, but others care and try to avoid it. I was approached by a small company in Indiana that makes decorated mailbox door covers, magnets, decals and the like: patriotic things for rural folks, like the American flag rippling in the breeze, and logos of college football teams. The company had decided to expand to Canada and market mailbox doors with the Canadian flag, and one with poppies! This company was diligent and respectful, wanting to do the right thing the right way. It ensured that its design of the Canadian flag was approved by the Department of Canadian Heritage and submitted its poppy design to the Royal Canadian Legion for feedback. It got approval of the flag design from Heritage, but the Legion turned the poppy design down flat. The Legion was not interested in discussing how the designs might be reworked to make them acceptable (although a bouquet of flowers with a few red poppies in it would be unlikely to violate the trademark, which is restricted to poppies used for commemorative purposes). Perhaps the problem was that this small company had not offered to license the design but was simply seeking the Legion's approval. In any event, having read one of my blog postings on poppies and the Legion, the company asked my advice.

I am certainly not qualified to provide any legal advice in this matter, and I was quick to tell them so. However, I did note that the Legion was zealous in protecting its IP and I could not rule out that they might decide to bring a trademark infringement case. I pointed out that Canadians are attracted to other things besides poppies—like maple leaves. Why not market a design with colourful, autumn maple leaves? The company thanked me for my views, noting that many other designs are popular with Canadians. I am not sure if that means they decided not to proceed with the poppy mailboxes—but I hope so.

I don’t have much else to add to previous blogs about the trademark and copyright issues related to Remembrance Day but do want to take this occasion to urge you to respect the IP of those who hold it if purchasing November 11 memorabilia. The Royal Canadian Legion’s trademark on the poppy was conferred by an Act of Parliament in 1948 as a gesture of appreciation for the sacrifice of veterans. That’s worth remembering.

And for those who missed it the first time around, below is my 2019 blog post outlining how the poppy came to be the symbol of remembrance, plus some of the controversies that have arisen from its use in this connection.

© Hugh Stephens 2021. All Rights Reserved

Remembrance Day Poppies and Intellectual Property Controversies

At the 11th hour of the 11th day of the 11th month in 1918, the guns on the western front in Europe fell silent. An armistice was declared. Germany didn't formally surrender, although it soon collapsed, and the 1919 Treaty of Versailles treated it as a defeated country (with the results twenty years later that we all know about). Thus Armistice Day, now known as Remembrance Day in many countries including Canada, the UK and Australia, and Veterans Day in the US, was born. And for that reason, I am posting this blog on November 11.

Today, in particular in Canada and the UK, Remembrance Day is marked by the wearing of poppies. They spring up on the lapels of TV news broadcasters, politicians, and members of the public like mushrooms in autumn. In the US, where they are not so ubiquitous as north of the border or in Britain, they are more typically worn on Memorial Day, in May, and in Australia and New Zealand I am told that poppies generally blossom around Anzac Day, April 25. (When I was in Sydney last November 11, I spotted just one red poppy, an admittedly unscientific although first-hand survey.) It is probably fairly well known (although with today's young people it is probably wrong to make assumptions) that the wearing of the poppy is a memorial to the sacrifices made by those who fought and died, initially in WW1 for the Allied cause. Poppies grew prolifically in the killing fields of Flanders in Belgium, and still cover the countryside today. The poppies were made famous by the poem "In Flanders Fields", written by Canadian military doctor John McCrae in 1915 after he saw poppies on the battlefield following the loss of his friend in the Second Battle of Ypres.

It was an American teacher, Moina Michael, who campaigned to make the poppy the international symbol for remembrance of Allied war veterans, and to use poppy sales for veterans' welfare. (Another prominent campaigner was Anna Guérin, who took inspiration from Ms. Michael and actively promoted adoption of the poppy.) Between 1920 and 1922 the poppy was adopted by veterans' organizations in the US, Britain, Canada, Australia and New Zealand. In Britain an organization known as the "Poppy Factory", which still exists today, was set up to manufacture the paper poppies for sale in support of veterans' causes. That leads us into the main point of this blog, the intellectual property (IP) controversies that have arisen around the sale of the poppies. Since Remembrance Day is upon us, I feel compelled to explore whether there is a copyright angle to the wearing of poppies. And surprise! There is.

It's a bit tenuous, mind you, although there is definitely an intellectual property angle as to who gets to produce and sell poppies. However, it is more of a trademark than a copyright issue but, hey, why quibble? It's all IP. There have been controversies in both Britain and Canada involving production and sale of the little red flower. In both countries (and possibly elsewhere) the poppy is trademarked, by the Royal British Legion (RBL) and the Royal Canadian Legion (RCL) respectively, both respected veterans' organizations. The Royal Canadian Legion's website notes that the trademark was conferred by Act of Parliament in 1948, and is limited to the use of the poppy in remembrance:

“The Canadian trademark for the Poppy includes both the Legion’s Poppy logo, as well as the Poppy symbol, as it relates to Remembrance. The trademark does not apply to the use of the actual Poppy flower, unless that usage is misrepresented as the Legion’s Poppy by associating it with remembrance or the raising of monies for a purpose other than the Poppy Campaign.”

However, the trademark extends to any colour or configuration of the poppy when used as a symbol of remembrance. This is increasingly relevant as various groups make their own versions available, from a white poppy symbolizing peace (some would say pacifism) to a rainbow-hued LGBTQ poppy that has caused some controversy.

Whether either of the Legions would take legal action against someone for producing and selling poppies of a colour other than red is an interesting question, but last year in Britain a seller at an outdoor market pleaded guilty to selling red poppies that had no association with the RBL. In Canada a group of knitters who were knitting poppies for the price of a donation, which they say they intended to give to the RCL, were reminded that they were violating the Legion's IP. In addition to its statement of trademark, the RCL has a very clear copyright warning on its website:

“The material on this site is covered by the provisions of the Copyright Act, by Canadian laws, policies, regulations and international agreements. Reproduction of materials on this site, in whole or in part, for the purposes of commercial redistribution is prohibited except with written permission from The Royal Canadian Legion…”

And what are the materials marketed on the site? Just about anything that you can stick a poppy symbol on: playing cards, bags, baseball caps, pins, brooches, watches, T-shirts, magnets, umbrellas, scarves, toques, mittens, stuffed animals, even cellphone cases. You get the idea. And then there are digital versions of the poppy that you can purchase and use to embellish your Facebook page. All the proceeds go to the Legion and then on to its veterans' welfare programs (although the Legion is not a registered charity). Nevertheless, the Legion's IP monopoly on the poppy symbol (when used in connection with remembrance) has not been without its critics. Sean Bruyea, a former air force officer and frequent commentator on veterans' affairs, has argued that the Legion should no longer have exclusive control of the poppy symbol since it is not the only veterans' organization in the country and, indeed, as the number of its veteran members has dwindled over the years owing to "natural attrition", many of the Legion's members have no association with veterans at all. Bruyea suggests that the Legion should license use of the poppy symbol to other veterans' organizations, with a concomitant sharing of the proceeds. I am sure that the RCL would argue that it alone is best placed to protect the meaning of the symbol and to ensure that poppies are effectively distributed so as to bring maximum returns. Last year (2018) over $16 million was raised.

Well, who knew that the innocuous looking poppy on people’s lapels on Remembrance Day carried such weighty IP concerns on its shoulders? When you drop your donation into the collection box, and pin your poppy on your jacket or shirt, remember…even the humble poppy can be controversial when it comes to trademark and copyright issues.

© Hugh Stephens