A quarter of a century ago, the internet was going to bring us wonderful things. The hubris of Silicon Valley was incarnated in the cyberlibertarian rantings of so-called internet guru John Perry Barlow and his “Declaration of the Independence of Cyberspace”. The hi-tech whiz kids actually began to believe in their own infallibility and in the innate goodness and benefits of cyberspace. They had drunk so much of the Kool-Aid that their eyes were orange. Any reasonable restraint placed on the internet was deemed to be a stifling of innovation and a violation of free speech. The internet was going to be liberating, it was going to be democratizing, it was going to enhance creativity and our quality of life, spawn new industries, and on and on. The constraints of the offline world, the laws and regulations that help balance society and keep excesses in check, were argued to be inappropriate for this new world.
Section 230 of the Communications Decency Act was passed in 1996, with lawmakers unaware how loosely it would be interpreted by the courts. This is the much-misused piece of legislation that internet platforms in the US have hidden behind to avoid any civil legal responsibility for malicious, defamatory and otherwise illegal content on their platforms, whether they are aware of that content, promote that content or refuse to take any action to delete or moderate it. Section 230 has been widely recognized to be in need of amendment, despite the supposedly transformative role that professor and journalist Jeff Kosseff gives it in his recent book, “The Twenty-Six Words That Created the Internet”. Those 26 words (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”) constitute the formula that has immunized the platforms from civil liability for harmful material on their platforms, and they have been largely responsible for some of the most negative aspects of the internet.
Some of the internet hype back in the 1990s was true, but a lot was not. There is no question that the internet has brought about a huge transformation in the way we communicate, disseminate information, buy and sell, and even think. Unfortunately, as noted above, it has come with a lot of collateral damage. When this occurs, it is the role of legislators and regulators to mitigate the harm, to establish guardrails, to amplify the good while minimizing the bad, in short, to protect society, because we have learned that the industry cannot be trusted to do so of its own volition. We are now on the cusp of that situation when it comes to generative AI, including AI-generated art.
Several commercial platforms that convert text prompts entered by users into images that resemble art or photography are becoming more widely available to the general public, as I discussed in this earlier blog on AI-generated art. This technology poses two major issues from a copyright perspective. First, to generate their images, the AI machines ingest without permission a vast number of images scraped from the internet, many of them protected by copyright. Without the raw material produced by human artists, the algorithms would not be able to generate anything that resembles human-produced art. Yet at least some of this AI-generated output is likely to compete in the market with the works of artists who unwittingly fed the AI machine. The unauthorized scooping of copyrighted content not only destroys any potential market for licensing content for data mining purposes, it also threatens to destroy the livelihood of many current artists, designers and illustrators. Whether this is legal is still to be determined. In the US there has been no clear fair use ruling that would permit this vacuuming up of copyrighted content. In other countries, text and data mining exceptions either do not exist in copyright law (as in Canada) or are restricted to non-commercial research purposes (as in the EU). As AI develops, the rights of creators must be kept in mind and protected.
A second copyright concern, at the moment largely theoretical but soon no doubt to be real, is the question of who exercises copyright over AI-generated works. It is accepted in almost all jurisdictions (the UK is an exception) that for a work to be granted copyright protection, it must be fully or partly created by a human, along with the other normal attributes required of a copyrighted work (originality, creativity and fixation in a material form). Even if AI assists in the creation of the work, there must be a “human hand” in control. In fact, after artist Kris Kashtanova went public stating that she had obtained a copyright registration on her fully AI-generated graphic novel, thus creating a precedent (she hadn’t), the US Copyright Office questioned the registration, requiring the applicant to demonstrate that there was “substantial human involvement” in the creation of the work. The USCO subsequently stated it would cancel her copyright, and she is now appealing, claiming that she was the artist after all, albeit “assisted” by AI.
There is yet another copyright angle to creation of art through AI that has been explored by copyright blogger David Newhoff. He has examined the question of whether the crafting of “prompts” to generate art through AI could be considered sufficiently original and creative to qualify for authorship, and thus copyright protection. He is prepared to consider that a carefully constructed prompt might qualify as an original literary work, much as a line of code in a software program might qualify, but not the resulting work because that is the work of the algorithm, not the person creating the prompts. The Infolaw blog in the UK has also carefully considered this question. There are no straightforward answers.
While these copyright questions remain to be resolved, equally serious generative AI issues arise from the way in which the final work is used and the nature of the content it incorporates. Coming back to my original premise that the internet cannot be left on autopilot and expected automatically to become a force for social good, the same is true for generative AI, especially AI-generated art. While generative AI has the potential to save time and money for businesses and creators and even improve content, there is also plenty of potential for abuse. This is surely going to happen unless the enablers of this technology establish some rules. You can easily imagine how the technology could be misused, producing everything from misappropriated personal images doing undesirable things to manipulated images putting people, including prominent people, in situations and places where they have never been, not to mention copyright infringement and a host of other abuses. There are already reports of videos of ISIS beheadings and non-consensual porn powering AI-generated art. And the sex industry is no less present in the world of AI-generated content than it is in other facets of the internet, as reported by TechCrunch, which profiled Unstable Diffusion, a new AI-enabled site devoted exclusively to porn.
Because of the potential for abuse, platforms such as OpenAI and Midjourney have built in guardrails, various filters and controls that will constrain (but probably not entirely prevent) misuse of generative technology. Google, which is developing its own platform, Imagen, is holding back public release until it can figure out how to ensure appropriate use. However, Stability AI’s platform, Stable Diffusion, which is open source and free to the public, launched with no such controls. According to the New York Times, Emad Mostaque, the founder and chief executive of Stability AI, has pushed back on the idea of content restrictions, arguing that “radical freedom” is necessary to achieve his vision of a democratized AI untethered from corporate influence. Mostaque is quoted as saying, “We trust people, and we trust the community”. This is a disaster waiting to happen.
Does Mostaque’s attitude stem from naïveté, or just irresponsibility? I would call it both, plus arrogance and maybe some crass commercial advantage. This laissez-faire, “let it all hang out” attitude will inevitably result in “the community” (or fringe parts of it) abusing the technology, but it will also give Stability AI some additional market share. Even though some platforms have tried to set guardrails, the nature of the industry is that, in the end, the standard is set by the lowest common denominator. Already OpenAI is modifying some of its early controls as a result of Stability AI’s lack of discipline. The inevitable abuses may result in the hammer of legislation being brought down, despite Mostaque’s ramblings objecting to control by big tech or governments. Generative AI won’t stop at still images. Mostaque is already talking about extending it to video, audio and other formats. Music is the next creative industry to face the AI challenge. Given the speed at which this technology is developing and being rolled out, with very few controls and limited screening of content, the law of unintended consequences will be quick to assert itself.
If AI platforms have no responsibility for the content they enable users to create, just as internet platforms enjoy immunity in civil law from the consequences of the content they enable users to distribute, the benefits of the technology could well be outweighed by its negative side-effects. The latest AI-enabled application to cause concern is Lensa, a Stable Diffusion-powered tool that takes your selfies, retouches them and produces an art image with a version of you. Teenagers love it. But not only are users giving away their facial data, which could be misused; the images that Lensa generates from the Stable Diffusion database are often overly sexualized or misogynistic.
Despite all the concerns, there were some signs at the end of last month that the “responsibility” message may be starting to seep through, or at least an awareness that there could be some legal consequences. In November, Stability AI tweaked the Stable Diffusion model to remove the ability to mimic specific artists or to generate nude and pornographic content and realistic portrayals of celebrities. Perhaps Mostaque’s unlimited trust in people and “the community” was tempered by a dose of reality? Yet when there was pushback from parts of “the community”, Mostaque responded that because Stable Diffusion is open source, users are free to add these features back at their discretion. It seems he is more concerned with Stable Diffusion’s potential legal exposure than the impact of unrestrained applications of AI.
Some clear ground rules need to be established now, before things get out of control, rather than waiting for the inevitable abuses to occur. If industry won’t do it—and industry won’t because there will always be a new disruptive upstart seeking to gain misguided competitive advantage by ignoring the rules—then governments will have to. Even though the US Congress has been unable to get its act together to reform Section 230, governments around the world are finally waking up to the need to control online abuses. Introduction of online safety legislation is underway in the EU, UK, and Canada. Australia already has such legislation. New Zealand has developed a voluntary code. Abuses from AI generated images will have to be addressed within this framework.
If we have learned any lessons from the cyberlibertarian philosophy of some of the early protagonists of the internet (a philosophy still espoused by some, such as the Electronic Frontier Foundation), it is that the rules, incentives and punishments established by society to guide the behaviour of some for the benefit of all, apply every bit as much to the internet, cyberspace and AI as they do to other aspects of life. If, as Mostaque claims, “people” and “the community” could be trusted to do the right and sensible thing, we would have little need for the apparatus of the state other than to deliver services. Sadly, that is not how life works.
AI-generated art is on the cutting edge of the next wave of technology-enabled abuse unless action is taken now to place some responsibility on the shoulders of the enablers. If you invent a weapon, load it, hand it to an anonymous user and show them how to fire it, you must take some responsibility for the results. We have seen it happen already with misinformation and abusive content on the internet. Generative AI is the next big challenge. Let’s make sure this time we get it right.
© Hugh Stephens, 2022. All Rights Reserved.