Early Setback for Artists in Lawsuit Over Generative AI

The first group of artists to bring a copyright-infringement lawsuit against Stable Diffusion developer Stability AI, along with Midjourney and DeviantArt, suffered a setback in court last week when a federal district court judge in San Francisco indicated he is likely to dismiss most of the charges in the case, albeit while leaving the door open for the plaintiffs to file a new complaint.

At a hearing Wednesday (July 19), District Judge William Orrick said he is “inclined to dismiss most of [plaintiffs Sarah Andersen, Kelly McKernan, and Karla Ortiz’s] claims without prejudice,” meaning they can amend their complaint to address the deficiencies cited by the defendants in their motions to dismiss and refile their lawsuit.

Of the three named plaintiffs, only Andersen had registered the allegedly infringed works with the Copyright Office, and thus “asserted a cognizable claim of direct infringement against Stability AI for copying her work at the ‘input’ stage,” Orrick said.

The court has yet to issue a formal written opinion in the case, so nothing is certain. But in his comments from the bench, Judge Orrick put his finger on one of the main hurdles faced by artists, as well as by photographers, writers, and other creators who have filed, or threatened to file, lawsuits over the unauthorized use of their work in training AI models, discussed here in previous posts: the problem of attribution.

According to Orrick, the plaintiffs should be able to “provide more facts” about the alleged copyright infringement because they have access to Stability’s relevant source code, which Midjourney and DeviantArt also incorporate into their code bases.

Absent that, he said, it “seems implausible” that the plaintiffs’ specific works were significantly involved in constructing Stable Diffusion’s model, given that it was trained on more than 5 billion images.

The judge also seemed to pour cold water on claims that the output produced by the image generators could be infringing copies of images used in training.

“I don’t think the claim regarding output images is plausible at the moment,” Orrick said, “because there’s no substantial similarity” between the images produced by the models and those of the artists.

While Judge Orrick’s musings from the bench have no direct bearing on the now multiple similar cases filed by creators and rights owners over the use of their works to train generative AI models, as the first to the courthouse door, the San Francisco case is something of a case of first impression involving the current generation of AI technology. As such, having it summarily dismissed over the attribution problem does not bode well for those other cases, or for hopes of using well-established copyright case law and statutory text to convince (force?) AI developers to accept a licensing regime for the use of copyrighted works in training datasets.

Nor does it bode well for efforts to create and operate such a system. If attribution cannot be established for purposes of liability, it will be difficult to rely on for purposes of remuneration.
