A trio of artists last week filed a lawsuit in U.S. District Court in San Francisco against the developers of Stable Diffusion, Midjourney and DeviantArt, charging them with copyright infringement for scraping millions of images from the internet and using them without permission to train their artificial intelligence-based image-generating software. The complaint, which asks the court to certify it as a class action, also charges the defendants with unfair competition and violating the plaintiffs’ right of publicity under California law for advertising their AI’s ability to create works “in the style of” named artists.
On the flip side of that coin, AI developer Dr. Stephen Thaler filed suit in Washington, DC, against the U.S. Copyright Office, also last week, asking the court, in effect, to order the office to grant copyright registration to a work produced by Thaler’s Creativity Machine, after the office had repeatedly refused to register it. That complaint argues the “plain language and purpose of the Copyright Act” is at odds with the Copyright Office’s insistence that formal copyright protection requires human authorship and that works autonomously generated by computers are not eligible for registration.
Then this week, stock photo giant Getty Images issued a statement announcing it, too, had “commenced legal proceedings” against Stable Diffusion-developer Stability AI over the unlicensed use of “millions of images” belonging to Getty in its training dataset.
So, the legal battles are now fully joined, over what goes into an AI image generator, and what comes out of it.
I have a feeling we’re going to be here a while.
It’s been clear at least since the beginning of 2020, when the World Intellectual Property Organization published the comments it received as part of its public consultation on the implications of artificial intelligence technology for intellectual property policy and practice, that the use of copyrighted works in the data sets used to train AI systems would be among the first major flashpoints in the technology’s development. While the litigation against image generators may be new, the debate over whether works produced by AI systems trained on copyrighted works should be considered derivative works under copyright law, for which a license therefore should have been obtained, is not new.
As for Dr. Thaler, last week’s lawsuit also may be new, but his beef with the Copyright Office is not. He originally filed an application, in November 2018, to register a work he claimed was “Created autonomously by a machine.” He also sought to be named the author of the work, titled “A Recent Entrance to Paradise,” and the rightful owner of the copyright under the work-for-hire doctrine. But the Office rejected the application, stating the work “lacks the human authorship necessary to support a copyright claim.”
Thaler then filed two requests for reconsideration, the first in September 2019 and the second in May 2020, both of which were similarly rejected by the Copyright Office.
Thaler also tried a similar gambit in the U.K. and Australia in 2019, seeking a patent for a device he claimed was autonomously invented by another of his AI systems. He briefly found success in Australia in 2021, when the Federal Court of Australia overturned the Australian Patent Office’s rejection of the application, only to see that ruling reversed on appeal in 2022.
I cannot speak authoritatively on the legal merits of Thaler’s arguments. But the commentary I’ve seen from those who can speak authoritatively seems to converge on the view that his lawsuit against the Copyright Office is likely to fail, given existing caselaw concerning non-human authors and the Supreme Court’s requirement of a modicum of “creativity” for a work to be considered original.
I’m not sure Dr. Thaler actually expects anything else, however. The legal fights he’s picked with copyright and patent authorities seem engineered as much to highlight the increasingly quaint-looking foundations of the legal system’s traditional conception of authorship and creativity in the age of autonomous machines as to secure any financial or proprietary gain.
That’s not to say the financial implications aren’t huge, however. Anything but.
The astonishing advances made in AI technology over the last two years, and the commercialization of tools like Stable Diffusion and OpenAI’s ChatGPT, have yanked the discussion around AI authorship and the use of copyrighted works in training data out of the largely theoretical and academic realm of the 2020 WIPO proceeding and thrust it into the big money worlds of major media conglomerates and Wall Street. With head-snapping speed.
As recently as 2021, OpenAI was valued in tender offers at roughly $14 billion. This month, it’s in talks with investors about a new tender offer at a valuation of $29 billion. Microsoft alone, which invested $1 billion in OpenAI in 2019, is looking to invest $10 billion more and to integrate ChatGPT into nearly all its products.
AI tools increasingly are found on Hollywood sound stages and in post-production suites, while Apple and Google have each announced plans to use AI-generated digital voice narration to create millions of new audiobooks.
In other words, it’s no longer merely a question of how or whether artificial intelligence could (or should) produce economically viable products within fields traditionally regarded as creative. News articles, blog posts, advertising copy, still and moving pictures and sound recordings are today all being produced, at scale, and with varying degrees of autonomy, by AI systems. They will continue to be produced, with ever-greater fidelity to what humans historically have produced. And for better or worse, the market will determine their value.
The urgent questions now are on what terms that production should occur, and who should reap the rewards of what is produced.
With real money now at stake, disputes and further litigation inevitably will arise. And disputes over copyright (and patents) are seldom quick and easy to resolve. Litigation typically turns on highly fact-specific analyses of the case at hand. The statutory language framing fundamental concepts like fair use is intentionally non-specific, leaving courts to do much of the heavy lifting of determining their contours.
The statute refers to “original works of authorship,” but does not define “original.” The Supreme Court, in Feist Publ’ns, Inc. v. Rural Tel. Serv. Co. (1991), said the originality requirement “means only that the work was independently created by the author (as opposed to copied from other works), and that it possesses at least some minimal degree of creativity.” But it did not define “creativity” or “author,” nor, as Thaler reads it, did it explicitly restrict either to humans.
As the late, great Dodger announcer Vin Scully used to say, “pull up a chair, we’re just getting started.”