Copyright and the Inveterate Diarist


I will confess to having kept a diary for many years. What started out as a way to keep track of holidays and so on became a persistent habit that I have been unable to break to this day. It’s not as if I go back and re-read them regularly, but a diary is enormously helpful in settling disputes over where we were, and when, at any given time in the past. Aha. I knew it was 2015 and not 2016 when we saw Aunt Maud. And, of course, it brings back the immediacy of whatever was happening that day. It is sometimes surprising what gets into a diary (an overcooked steak on a barbecue, for example) while world events can go sailing by with scarcely a mention. (That is what newspaper archives are for.)

The other day I came across an article in my local newspaper titled “Do you know where your diary is?”. The author, a self-described “amateur sleuth”, wrote about a diary museum she had discovered in Italy, the Piccolo Museo del Diario. This little diary museum is part of Italy’s National Diary Archive, home to over 10,000 fragments, memoirs, photos and archives. Most are from “ordinary people”, which is the case all over the world. Some famous diaries have been published, such as the Diary of Anne Frank, but most remain unpublished, and many are simply a monologue between the diarist and the daily blank page. Diaries can be very personal, revealing confidences; they can be factual on-the-ground reports of what actually happened from the perspective of one observer; or they can deliberately omit unpleasant facts and rewrite the events of the day to suit the fancies of the diarist. Most are not intended to see the light of day, but what happens when you move on? Will someone else read them? Will that person decide to publish them, or excerpts of them? What if they are publicly displayed in a museum, like the Piccolo Museo del Diario? Who really owns the content of a diary?

You will probably not be surprised to know there is a copyright element to this puzzle. Whether published or unpublished, the diarist holds the copyright to the contents of the diary, for it was he or she who created it. Whatever they wrote is an original expression of ideas, and it exists in tangible form. Voilà. All the essentials for copyright are there. That copyright will persist for a number of years after the death of the writer, a full seven decades in most western countries today. The diaries themselves could be sold–this might happen if they were the work of someone famous–but even under these circumstances the purchaser does not hold the copyright. The copyright holder controls the right to reproduce and distribute excerpts from the diary. Most posthumous copyrights are held by the heirs to the estate of the diarist, which facilitates the creation of family histories. However, as one professional archivist noted in a blog post on Copyright Fundamentals for Family Historians:

“I may have a diary my mother wrote, but she still retains the copyright of its contents. If I wanted to publish this diary, in whole or in part, I would need to ask permission first.”

And if your mother is not around to ask, unless you are the designated heir to her estate, you would need the permission of whoever is, or, if you share the estate with someone else, a sibling for example, you would need that person’s permission to publish. Many diaries, however, are in the public domain, their copyright having expired. So, if you are going to quote from the letters of Pliny the Younger describing the eruption of Vesuvius and the destruction of Pompeii in AD 79, you are on firm ground.

Letters are also protected by copyright, as Meghan Markle, Duchess of Sussex, clearly proved with her successful lawsuit against Associated Newspapers, publisher of the Mail on Sunday, for publishing excerpts from a private letter to her father. The Mail was ordered to pay substantial financial remedies to offset Markle’s legal costs for violating her copyright. A separate action regarding violation of privacy also went against the Mail, although the award for damages in that case was symbolic.

If you don’t want your diaries read, or worse, published, after you are no longer here, then burn them. Otherwise, your heir or heirs will call the shots. Moreover, at some point in the future, when your copyright has expired, the diaries will be in the public domain and freely publishable by anyone. This is assuming that anyone at that stage would actually be interested in publishing your catalogue of daily doings and impressions. It’s most unlikely. My advice to diarists: Write for yourself, not the future.

© Hugh Stephens, 2023. All Rights Reserved.

Bill C-18 (The Online News Act): Does Flexibility and Dialogue Represent “Compromising” or “Caving”?


When does “compromising” to find a solution that works for both parties become synonymous with “caving” (in)? Only when you take a perverse delight (one might even call it schadenfreude) in gloating, “I told you so”, when the Canadian government’s policy initiative (C-18, the Online News Act) runs into opposition from the giant internet platforms at which it is aimed. The Online News Act (ONA) requires the behemoth internet platforms Google and Facebook (Meta) to contribute financially to news media organizations when they use (make available) news content on their platforms. To no one’s surprise (because it will cost them some money), Google and Meta don’t like the legislation and have pulled out all the stops to oppose it, as they did with similar legislation in Australia. This includes threatening to block Canadian news content on their platforms; indeed, they have already taken preliminary steps to test blocking processes on a random selection of Canadian consumers.

A compromise involves making mutual concessions to reach an agreement between two parties. Caving (in) usually means abject surrender. It is clear that the current standoff between the government and the platforms will only be resolved through compromise on both sides, as I pointed out in a recent blog post (When the Irresistible Force of the Canadian Government Meets the Immovable Object of Meta and Google: What Happens Next for the Online News Act?).

To recall, for those who have not been following all the permutations of the government’s attempts to corral these two internet giants and encourage them to negotiate “fair compensation” for their use of news content produced by others, there are several possible outcomes. One is that the platforms fail to bargain with news providers in good faith, using their massive economic leverage to dictate terms. In that event, the legislation will be triggered and “final offer arbitration” will be invoked through the establishment of an arbitral panel. Under final offer arbitration, the panel (selected by the two parties from a roster provided by the CRTC) must choose one of the two final offers put forward by the opposing parties. The intent of this process is to level the playing field between the internet giants and small media players and to encourage the negotiation of balanced agreements, since, absent good faith negotiations, the ultimate step could be arbitration.

A second outcome is that the platforms and news media organizations reach agreements that both sides consider fair. That is what happened in Australia. This outcome is truly win-win and is the whole point of the legislation. The platforms have already reached agreements with some media organizations, although they are now threatening to terminate these agreements because they don’t like the way the Online News Act is framed, just as they did as part of their pressure tactics in Australia. (Google in Australia: Dangle the Carrot and then Yank it Away).

A third outcome, and the one currently being brandished by the platforms, is to comply with the ONA by opting out, that is, not using, or making available, any news content on their platforms. If they are not using news content, there is no legal obligation to pay for it. Since the Act only applies to “designated news intermediaries” (all of which are Canadian), the platforms can, in theory, comply with the Act by ensuring that consumers in Canada are unable to access Canadian news content. For example, when consumers search for “forest fire warnings” on Google, no Canadian news sources will come up. Canadians might be able to find some news on Google about forest fires in their region from US sources (although this is unlikely), but local news will be blocked, and the utility of the search engine will be compromised. What will consumers do? They will have to use another search engine, such as Bing, Yahoo or DuckDuckGo, or go directly to trusted news sources. These alternative search engines may not be consumers’ first choice, but they will produce the information needed, in the process familiarizing users with products other than those produced and controlled by Google. Moreover, by selectively blocking a range of search requests from users in just one (fairly large) country, Google will be compromising the algorithmic settings it uses to rank search results. What a triumph for Google! It will have alienated a large consumer base, damaged both its reputation and its technology, potentially put the public at risk, and entered into a head-butting match with a G7 government that has various means of dealing with recalcitrant companies (such as a Digital Services Tax, long under discussion). Why not find a solution that avoids all this? It is called “compromise”. Google and Meta need to compromise, just as the Canadian government needs to demonstrate some flexibility, if both sides want to reach a win-win outcome.

Google has concerns about what it considers to be potentially unlimited liability. It also wants to be able to include in the calculation of compensation the agreements it has already reached with media organizations, and it would like some assurance that if it bargains in good faith, it will be exempted from the application of the legislation. (Google achieved the same outcome in Australia, earning de facto exemption because it was able to reach sufficient agreements with Australian media outlets.) This strikes me as not unreasonable. In response, the government, in preparing to draft regulations under the Act, has moved to clarify various issues through the recent release of a bulletin (The Online News Act: Next Steps). In this document, the Heritage Department clarifies that it will establish a threshold for contributions to the sustainability of the Canadian news marketplace based on a platform’s estimated Canadian revenues (i.e., no unlimited liability); reaffirm language already in the Act providing that non-monetary offerings to news organizations, such as training, be included and that existing agreements with news businesses can count toward the determination of fair compensation; and clarify what constitutes a “significant portion” in terms of the number of agreements the platforms are required to reach with various categories of news organizations in order to gain an exemption. These are reasonable steps to address concerns expressed by the platforms while maintaining the policy objectives of the legislation. For the record, this is called compromise, not caving.

As for Meta (Facebook and Instagram), judging by its confrontational rhetoric, this company may have decided that it is going to be an outlier, taking the news blocking route rather than being willing to compromise. Meta claims, correctly, that some news organizations post their own material to the platform, and argues that it should not have to compensate them when the posting is at the news organization’s own initiative. This, however, is a red herring and has been seized upon by opponents of the legislation to try to discredit it. There is no provision in the legislation for compensation based on a specific number of links or clicks on links. Rather, it is an overall process that takes into account the respective benefit enjoyed by news organizations from having their content on Facebook (and Google) and the offsetting benefit derived by the platforms from using news content to attract and retain viewers (thus increasing their exposure to online ads). If Facebook blocks users from posting news items, it will make itself a less valuable platform and forum for consumers. Ultimately there will be market consequences. Reaching reasonable agreements with news providers would be the logical way to proceed, but ultimately if Meta wants to shoot itself in the foot, it has the ability to do so. In Australia, after doing exactly that (Facebook in Australia—READY, FIRE, AIM), it decided that a wiser course of action would be to negotiate with news providers, thus avoiding designation under the Australian legislation.

I still believe there is time (around six months), the will (on the part of the government and possibly both platforms), and the means (drafting, consulting on and clarifying regulations) to reach a solution that will work for the platforms, the government and news media. Canada will not be the last country to try to ensure that the big platforms make a reasonable contribution to the news content that they freely use. The parlous state of journalism, one of the pillars of any functioning democracy, is a primary reason why several similar initiatives are taking place in various countries. The platforms would do well to come to an agreement with the Canadian government that they can live with–even though it means drawing on their resources to some extent beyond their present “voluntary” commitments–in order to show that reasonable compromise is possible. If the platforms get to “yes” under the Canadian legislation, they will not have “caved” any more than the Canadian government has caved; rather they will have found a way to meet their objectives (and maintain their business models) through dialogue and a dose of compromise. Wine with a little water in it is better than no wine at all, for both parties.

© Hugh Stephens, 2023. All Rights Reserved.

International Regulation of AI Development and Application: Is it Feasible?

It seems that every day a new report emerges regarding concerns about artificial intelligence (AI) and how it will likely impact our lives. There have been dire suggestions that unless something is done, one day AI will take over, resulting in the end of humanity. There have equally been suggestions that, as with other technological advances in the past, AI will be accepted, normalized and incorporated into daily life. Society will adjust. Increasingly, however, there seems to be a recognition that there is an important role for government to play in creating the playing field on which AI will operate: establishing the rules of the game in order to control the worst excesses of unregulated AI.

A major question is how stringent those rules should be, and how to harmonize them among governments. Just as competition between companies is rapidly driving AI development, with some enterprises trying to gain competitive advantage by releasing AI applications early in the development phase while exerting very little control in the form of filters or limitations, there is an equal risk of a race to the bottom among governments, as competing jurisdictions seek to present themselves as the most “innovation-friendly” in a bid to attract jobs and investment. The solution is likely some form of international regime to set minimum standards and ensure transparency. The situation today in some ways resembles the copyright free-for-all that existed in the 19th century, when it was every nation for itself. In the end, the benefits of universal standards and application were recognized, leading to the first international copyright convention, the Berne Convention of 1886. Since then, there have been many international agreements covering a wide range of fields, with international institutions growing out of the UN system.

Arguments have been made by AI industry leaders, like Sam Altman of OpenAI, for the creation of an international entity along the lines of the International Atomic Energy Agency (IAEA) to manage AI. (Others have argued that the IAEA model is not appropriate.) Any form of internationally agreed regime is a far-reaching goal which, if it ever happens, will be preceded by more limited governance arrangements between leading creators and users of AI. Groups like the G7 and G20 have already issued statements on AI. The G7 Leaders’ Statement issued at Hiroshima in May included, as one of its key objectives, the intention of the member states to “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.” To give effect to this, a working group will be established in cooperation with the OECD and the GPAI (Global Partnership on Artificial Intelligence) for discussions on AI before the end of 2023. These discussions could include topics such as “governance, safeguard of intellectual property rights including copy rights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies.”

The GPAI, referenced above, describes itself as “a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities”, and comprises 28 member states plus the EU. There is no shortage of “talk-shops” working toward some form of international regulation. When and to what extent these fora will succeed in producing binding principles or rules remains to be seen. Until that happens, much of the action will be at the national or regional level.

The EU has already taken a first step with this month’s approval by the European Parliament of the AI Act, the opening stage of a long process to bring legislation controlling AI into force. The Act will need to clear the trilogue process, meaning it must be agreed not only by the European Parliament but also by the European Commission and the EU Council, and there will likely be changes as it goes through that process. The current version incorporates not only controls on outputs but also deals with inputs, requiring firms to publish summaries of the copyrighted data used to train their tools, in order to address the issue of unrestrained scraping of websites containing copyrighted content.

Outputs and inputs are the two big issues, with most of the attention to date focussed on the former. When it comes to controls on AI-generated content, concerns have been expressed about AI’s potential to incorporate racial or gender-based bias into algorithms, or to spread misinformation or disinformation. Concern has also centred on how AI is used in applications that could violate personal privacy, such as facial recognition technologies. These are all very legitimate concerns, and they are addressed in draft legislation being discussed in a number of countries, including the EU’s AI Act, but to date insufficient attention has been given to the question of AI inputs. The EU Act is one of the first to specifically address this dimension.

(Canada’s Artificial Intelligence and Data Act, abbreviated as AIDA, is long on potential output problems but barely touches on the input question, with the exception of a suggestion that AI systems should keep documentation regarding how requirements for design and development have been met and provide documentation to users regarding the datasets used.)

The question of whether AI technology developers are free to help themselves to any and all data they can access on the internet is far from settled. In the US, there are those who loudly proclaim that accessing data (including copyrighted and proprietary content) to feed AI algorithms is fair use, but that is a questionable assertion that remains to be decided legally. A number of countries have copyright exceptions for text and data mining (TDM), but these exceptions typically come with limitations, for example requiring that the output be used only for research and non-commercial activities. (Canada does not currently have a general TDM exception.) It should also be taken into account that licensed databases for use in AI development are available in a number of content areas, such as publishing and news content, an argument for limiting the “help yourself” process currently being practiced. And then there is the issue of what data should be excluded, for example, personal health records.

One tool for managing the unlicensed content being used to feed AI machines is to require that tech companies maintain and publish transparent data on the material they have accessed, allowing rights-holders and individual citizens to identify whether, and to what extent, their content or personal data is being used without permission. While such information could be used to allow rights-holders to opt out of having their content accessed, the onus should nonetheless be on those taking the content to secure permission in advance. Even if a documentation requirement is implemented, a great deal of damage has been done: existing AI models have already ingested vast quantities of content, almost all of it unauthorized.

While the EU is in the forefront, many jurisdictions are grappling with the AI issue. In Britain, the Sunak government is seeking to stake out a claim for the UK as a leader in AI regulation while making it a “go-to” place for AI investment and development. Having “freed” itself from the “shackles” of EU membership, the UK wants to portray itself as a responsible middle ground between EU regulation and the still largely unregulated free-for-all that exists in the United States. (It’s not that Congress is unaware of the range of issues presented by AI. The question is whether anything is attainable in the current highly partisan political atmosphere.) Sunak has proclaimed that Britain will host a summit on AI regulation in the fall (focussed on safety), and has got the Biden Administration to agree to participate.

A coordinated approach among governments is essential to avoid the “weakest link” problem: any weak link would be quickly exploited by some AI developers to gain an unfair advantage. This is already happening with respect to text and data mining laws, where some countries, Japan being one example, are being accused of throwing creators and rights-holders under the bus in the name of promoting innovation. (There is, however, some question about how loose the Japanese TDM exception actually is.) Singapore is another country where content concerns seem to play a very quiet second fiddle to the cacophony of tech interests. In the UK, initial misguided proposals to implement a limitless TDM exception, removing any possibility of licensing content for TDM purposes and in effect allowing expropriation of copyrighted content, were reversed after concerted pushback from the UK creative community.

It will not be easy for competing jurisdictions to hammer out a framework that will gain wide acceptance, particularly with the US, a global leader in innovation, being slow to the regulatory party. Developing nations with different governance models, like China, also need to be considered. China recently announced new measures to regulate generative AI, and although the motivation is more to strengthen state control than to promote democratic values, according to the Carnegie Endowment for International Peace, “many of the regulation’s proposed measures map neatly onto the AI principles described in the consensus international AI policy instruments that are often sold as affirming democratic values, such as transparency, respect for individual rights, fairness, and privacy.” Perhaps that provides some hope for the achievement of an international agreement on the use of AI, one allowing signatories a degree of flexibility in how its provisions are applied and interpreted. (After all, that flexibility is what allows many UN instruments to work today.) What is clear is that if there is to be an international agreement, while it may begin with “like-minded” states, eventually it will have to move beyond the usual suspects, especially to include countries where widescale AI development is likely to occur.

International regulation and agreement on common standards will not be an easy process. But the fact that so many international forums and national governments are seized of the issue is indicative of the recognition that now is the time to get ahead of the problem. So far, governments are playing catch-up, and that poses plenty of challenges. While individual jurisdictions will still jockey for advantage, there seems to be a growing acceptance that a series of regulatory silos is not going to serve national or global interests. How quickly that will be translated into concrete action is the big question. At the end of the day, some form of international coordination, if not regulation, is essential.

© Hugh Stephens, 2023. All Rights Reserved.
