The platform for the exchange of ideas and expression provided by the internet is, as everyone by now knows, a two-edged sword. It can empower thinkers and promote freedom, but it can also provide a platform for anonymous spewers of hatred and misinformation in all its forms. The question of who is responsible for managing this powerful tool, and the role of governments in ensuring responsible management, is at the heart of online safety (aka online harms) legislation in various countries. Australia and the EU have taken the lead. Australia passed an Online Safety Act in 2021, targeting cyber-bullying, image-based abuse and the promotion of terrorism. The EU has enacted a Digital Services Act, which is slated to come into effect in early 2024. The UK is grappling with a draft Online Safety Bill. In the US, regulatory efforts are lagging: a Kids Online Safety Act is currently under consideration in Congress, but it addresses only a limited number of harms aimed at minors.
In Canada, a first effort, presented for public consultation in 2021, was pulled back in the face of widespread criticism that it was too broad and would stifle free speech. If the internet is a two-edged sword, so is the idea of regulating speech on the internet. Most responsible people recognize that some limits are necessary, but equally there is concern about where the line is to be drawn, and who gets to make those decisions. When does combatting harmful speech (which can be either lawful or unlawful) cross over into censorship? While decisions over what to proscribe and what to allow, even if hurtful or damaging to some individuals, are complex and controversial, the situation becomes even more difficult when state actors misuse the internet to advance their own political agendas. As the Canadian consultation document on Online Harms noted,
“Online platforms are increasingly central to participation in democratic, cultural and public life. However, such platforms can also be used to threaten and intimidate Canadians and to promote views that target communities, put people’s safety at risk, and undermine Canada’s social cohesion or democracy.”
This problem was highlighted in this year’s annual report of PEN Canada, “Digital Transnational Repression”. PEN Canada, for those of you who don’t know, is the Canadian chapter of PEN International, the London-based organization, dating back to 1921, that advocates for freedom of expression for writers and journalists globally. The report highlights the cases of four expatriate women, from Iran, Turkey and China, who now live in Canada.
Farah (not her real name) was a university student in Iran who was arrested and prosecuted by the Iranian authorities for her human rights activities before she sought exile in Canada. Maryam Shafpour, also from Iran, was a human rights activist and blogger who spent two years in prison on charges of anti-regime propaganda and came to Canada after her release. Arzu Yildiz was a court reporter in Turkey (aka Türkiye) who fled after an arrest warrant was issued alleging terrorist propaganda because of her social media postings about the torture of suspects arrested after the aborted coup of 2016. She eventually sought refugee status in Canada. Sheng Xue left China after the 1989 Tiananmen Incident and has lived in Canada since then. She is active in leading human rights activities in support of dissident and minority groups in China. What all these women have in common is that they have been targets of digital transnational repression; in other words, they have been victims of online harassment and threats perpetrated, directly or indirectly, by state actors.
According to PEN Canada, Farah has been the target of smear campaigns on social media, including false allegations regarding her sexual behaviour and fabricated videos and photos intended to defame her and harm her reputation. Maryam has also been attacked with posts and videos telling fabricated stories about her sexual, financial and political activities. Attempts to ruin reputation and credibility seem to be a prime aim of many misinformation campaigns. Arzu, labelled a terrorist by the Turkish government, wants the opportunity to clear her name in a Canadian court. Sheng Xue has also been the target of online campaigns to destroy her reputation. In the pre-internet era, the threats came by phone or mail; now the abuse is conducted through groups on WeChat, China’s most popular social media platform, and fake photos have been posted on Twitter. WeChat has been in the news in Canada lately as a result of a report from Global Affairs Canada that Conservative MP Michael Chong was the target of a concerted campaign on the platform to discredit him during a recent by-election in May. The Global Affairs report stated that:
“Between May 4 and 13, 2023, a coordinated network of WeChat’s news accounts featured, shared and amplified a large volume of false or misleading narratives about Mr. Chong. Most of the activity was targeted at spreading false narratives about his identity, including commentary and claims about his background, political stances and family’s heritage.”
The report noted that it could not be proven unequivocally that China ordered and directed the operation, while indicating that China’s role in the information operation is “highly probable”. It also stated that the false narratives violated WeChat’s user code of conduct, but there was no indication the platform had applied its own content moderation standards. This is not a surprise. Getting WeChat to call out a Chinese government or United Front covert activity on social media is about as likely as Donald Trump admitting that he lost the 2020 Presidential election.
Research on the forms of digital transnational repression reported on by PEN Canada has been led by the University of Toronto’s Citizen Lab, an interdisciplinary “laboratory” focussed on research related to information and communication technologies, human rights, and global security. While NGOs like the Citizen Lab can call out abusive behaviour by state and non-state actors, victims have very limited recourse. Attempts by the four women featured in the PEN Canada report to get the authorities to take action inevitably went nowhere. Where does one turn? Local police forces are basically not interested, or do not have the resources or knowledge to do anything, and inaction is justified on the basis of protecting freedom of expression. Security and intelligence services are not much better at providing support, if you can even get them interested. It takes a victim with the profile of Mr. Chong (who was the target of Chinese political interference during the last general election, leading to the recent expulsion of a Chinese diplomat) to get anyone’s attention. The laws of Canada, and of most democratic countries, are not fit for purpose when it comes to combatting online harms and hate speech, let alone personal attacks and misinformation directed at individuals whom an overseas regime wishes to silence or discredit.
Will Canada’s eventual Online Harms legislation deal with this problem? Platforms can already be compelled to remove defamatory content if they have knowledge that the content meets the legal definition of defamation, as Google recently found out when it was fined $500,000 by a Quebec court for refusing to remove listings about a person wrongfully accused of pedophilia. Google had argued that language in the new NAFTA Agreement, the USMCA/CUSMA, gave it protection equivalent to what it enjoys in the US under the notorious Section 230 (of the 1996 Communications Decency Act), the much-abused US legislation that has allowed internet platforms to avoid any civil liability for harmful content made available on their services. At the time of the conclusion of the USMCA/CUSMA, I argued that Section 230 did not apply in Canada, a conclusion that the recent court decision has validated. However, forcing platforms to block defamatory content requires time and deep pockets. Given the platforms’ general unwillingness to comply, and their readiness to fight in court every attempt to make them do so, success in getting them to act, belatedly, comes only to the persistent and well-heeled, who usually must litigate to get the platforms to do the obvious right thing.
Online Harms legislation will put greater onus on the platforms to monitor for harms and to take corrective action, according to a set of defined criteria. But how broad or how narrow should those criteria be? Here again is the perennial problem of the two-edged sword. Even PEN Canada, which vigorously advocates for the protection of writers and journalists, and which has featured the problem of digital transnational repression, does not have an agreed position on the legislation. In its annual report, it states:
“PEN continues to monitor the (Canadian) government’s efforts to introduce legislation to address misinformation and online harms. The freedom of expression issues in regulating online communications are obvious, but cut both ways. We see how bullying behaviour silences voices in the digital commons, to the detriment of free expression, but also recognize the dangers of government regulation of online speech…PEN believes that action is needed to address online harms which are doing so much damage to our democracy. But we believe we should move with caution, transparency, and sensitivity to the public’s right to freedom of expression, to avoid threatening that which we are trying to protect.”
That pretty well sums up the dilemma.
When state actors abuse the guarantees of freedom of expression built into democratic societies, it is an unfair fight. They take advantage of freedoms of communication that they deny their own citizens, and use those freedoms to target, intimidate and possibly silence, through personal attacks, misinformation and innuendo, those who speak out against repression. The police, the platforms and even the judiciary are not of much help in protecting against this form of abuse. Online safety or online harms legislation needs to find a surgical approach (a scalpel rather than an axe) to address these forms of abuse while protecting the underlying principles of freedom of expression.
© Hugh Stephens, 2023. All Rights Reserved.