The U.S. Copyright Office held the second of its planned series of four listening sessions on generative AI last week, this time with a focus on visual arts. But if the officials were hoping to hear how Congress and the courts can or should respond to the challenges presented by the new technology, they may have been disappointed.
While there was plenty of robust discussion over allegations of copyright infringement and whether images produced by or with AI tools should be eligible for copyright protection, one major theme to emerge from the three-hour event was the need to look beyond copyright law alone to address those questions.
“Just the way that this field is developing and things are shaking out, they have implications for copyright law, perhaps more than any other field,” said Daniel Takash, regulatory fellow at the public policy think tank The Niskanen Center. “But the second point I’d like to make is that as important as copyright is, I don’t think it should be the final word, or even necessarily the most consequential word on developments in AI. If we’re talking about general safety, job dislocation or other issues that are separate from, even though they may be related to, intellectual property, I think it’s important that they take priority in any discussion.”
James Silverberg, CEO of the American Society of Collective Rights Licensing (ASCRL), which collects licensing fees outside the U.S. on behalf of illustrators and photographers, was similarly skeptical that copyright law can fully address the challenges generative AI poses to artists and rights owners. That’s especially so, he averred, with respect to the problem, discussed here previously, of attributing particular generative AI outputs to particular training inputs in the context of large language and latent diffusion models.
“We need to be aware that the current copyright paradigm is not well suited to the promotion of art and authorship in the context of how AI generates visual artwork,” Silverberg said. “For example, one of the main challenges with AI-generated works is that the existing copyright laws are ill-equipped to preserve the economic benefit of authors when their material is learned or ingested, or when uncopyrightable styles are appropriated.”
He added, “Much of our discussion today is already focused on the failures, challenges, and uncertainty of applying the existing law and debating its application in the context of ingestion. And this, in and of itself, may be proof that copyright law is, at best, problematic and uncertain as a solution to the problem of author protection.”
Silverberg’s solution to the problem is to focus on remuneration to artists for the use of their works in AI models, rather than on the narrow legal questions of infringement and fair use.
“ASCRL believes that we need a new way of thinking about how we should implement the constitutional premise that we reserve to authors their rights and their ability to receive compensation, because the current copyright system is not achieving and cannot really achieve the constitutional goal in the AI context,” he said. “So, to address this challenge, ASCRL recommends that we do not entirely focus on the niceties of infringement issues, of interim copying fair-use factors, and we move towards legislatively implementing collective licensing systems like those that are currently used very successfully in many foreign countries.
“These systems serve our constitutional objectives and facilitate licensing and the use of AI, and create a more balanced system that recognizes the needs of the AI community as well as the authors whose works or work attributes are ingested into these systems,” he continued. “We are hoping to level the playing field by adopting non-title-specific, non-author-specific compensation where works cannot be specifically identified, in order to compensate for uses.”
Arming artists with the tools to protect themselves was another popular theme.
“We want our tools to be good for enterprises and the creative community when it comes to copyright, and we know that the issue of training is one where the creative community has concerns,” Adobe senior director and associate general counsel J. Scott Evans said. “For this reason, through a technology Adobe developed called Content Credentials, we’re enabling artists to attach a ‘do-not-train’ tag that will travel with the content wherever it goes. With industry adoption, it is our hope that this tag would prevent the training on content that has the tag. We are working with generative AI technology companies to respect these tags from an output standpoint.”
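Content Credentials works by attaching provenance metadata to the image file itself. As a rough illustration only, the sketch below shows how a downstream training pipeline might honor such an opt-out flag; the manifest structure and field names are hypothetical stand-ins loosely modeled on C2PA-style assertions, not Adobe’s actual schema.

```python
# Hypothetical provenance manifest carrying a "do-not-train" tag.
# Field names are illustrative stand-ins, NOT Adobe's actual
# Content Credentials schema.
manifest = {
    "assertions": [
        {
            "label": "training-and-mining",
            "data": {"ai_generative_training": {"use": "notAllowed"}},
        }
    ]
}

def is_training_allowed(manifest: dict) -> bool:
    """A compliant scraper would call this before adding an image
    to a training set, skipping any work that has opted out."""
    for assertion in manifest.get("assertions", []):
        entry = assertion.get("data", {}).get("ai_generative_training")
        if entry and entry.get("use") == "notAllowed":
            return False
    return True

print(is_training_allowed(manifest))  # False: this image opted out of training
```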
Legislating compliance with such a tag would further bolster the case for a metadata-based approach to the problem, Sheppard Mullin attorney James Gatto suggested. “There’s a lot of tools out there that can be used that help mitigate the problem,” he said. “Whether those tools should be mandated, or there’s some other role the Copyright Office can play with respect to them, would be helpful.
“Should AI tool providers be required to be more transparent on the content they use to train their models? I think that’s an important issue,” he added. “Should there be greater use of tools that prevent AI from using copyrighted works to train AI, similar to how robots.txt works to prevent search engines from indexing certain web content? The technology is there, and some of the concerns can be [addressed] if these tools become mandated.”
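The robots.txt mechanism Gatto invokes already has the plumbing for this kind of opt-out: a site lists which crawlers may fetch which paths, and well-behaved bots check the file before scraping. The sketch below uses Python’s standard-library robots.txt parser; “ExampleAIBot” is a hypothetical crawler name, and, as Gatto notes, any binding effect would depend on a mandate rather than on the technology itself.

```python
from urllib.robotparser import RobotFileParser

# A toy robots.txt in the spirit of Gatto's analogy: block a
# (hypothetical) AI training crawler while leaving ordinary
# search engine crawlers alone.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant AI crawler checks before fetching images for training.
print(parser.can_fetch("ExampleAIBot", "https://example.com/gallery/art.png"))  # False
print(parser.can_fetch("SearchBot", "https://example.com/gallery/art.png"))     # True
```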
Another proposal, not nearly as well received by the other panelists, was to remove the question of AI training from the realm of copyright altogether.
“There certainly are a number of very interesting questions about AI and copyright. I’d like to focus on one of them, which is the intersection of AI and copyright infringement, which some of the other panelists have already alluded to,” offered Curt Levey of the conservative legal research and advocacy organization The Committee for Justice. “I and the Committee for Justice favor strong protection for creators, because that’s the best way to encourage creativity and innovation. But at the same time, I was an AI scientist long ago in the 1990s, before I was an attorney, and I have a lot of experience in how AI, that is, the neural networks at the heart of AI, learn from very large numbers of examples. And at a deep level it’s analogous to how human creators learn from a lifetime of examples, and we don’t call that infringement when a human does it. So it’s hard for me to conclude that it’s somehow infringement when done by AI.”
“I think it’s important that we properly characterize the training process,” added Ben Brooks, head of public policy at Stability AI, the developer of the Stable Diffusion AI image-generating tool. “These models are not, as is sometimes being described, you know, a collage machine or a search index for images. These models review pairs of text captions and images to learn the relationships, again, between words and visual features…
“Stable Diffusion notoriously struggles to generate hands, and so it produces three-fingered hands or twelve-fingered hands, because it doesn’t know that a hand typically has five fingers and it isn’t searching a database of the many images with hands,” Brooks continued. “Instead, it has learned that a hand is a kind of flesh-colored artifact, typically accompanied by a number of sausage-like appendages. And that has real implications for how we should think about AI training and generation. In other words, these models are using knowledge learned from reviewing those text and image pairs to help the user generate a new work. They’re not using the original images themselves, and those images are nowhere in the AI model.”
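Brooks’s description maps onto the standard training loop for diffusion models: corrupt an image with noise, ask the network to predict that noise given the caption, and update the weights. The toy sketch below illustrates the loop at a purely conceptual level; the dimensions and the tiny stand-in network are illustrative and bear no resemblance to Stable Diffusion’s actual architecture. The point it demonstrates is Brooks’s: each (image, caption) pair nudges the weights and is then discarded, and no copy of the image lives inside the model.

```python
import torch
import torch.nn as nn

IMG_DIM, TXT_DIM = 64, 16

# Tiny stand-in for a denoising network (real systems use a U-Net).
model = nn.Sequential(
    nn.Linear(IMG_DIM + TXT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, IMG_DIM),  # predicts the noise that was added
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(image: torch.Tensor, caption_emb: torch.Tensor) -> float:
    noise = torch.randn_like(image)
    noisy_image = image + noise  # corrupt the training image
    pred = model(torch.cat([noisy_image, caption_emb], dim=-1))
    loss = nn.functional.mse_loss(pred, noise)  # learn to undo the noise
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Fake (image, caption) pair standing in for one scraped training example.
image = torch.randn(1, IMG_DIM)
caption_emb = torch.randn(1, TXT_DIM)
print(training_step(image, caption_emb))  # the pair updates weights, then is discarded
```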
Karla Ortiz, one of the artists currently suing Stability AI, was having none of the AI-to-human analogy.
“I can tell you that anthropomorphizing these tools to equate them as human-like is a fool’s errand,” she said. “I’ve spoken to countless machine learning experts…and they all agree. That is not what’s happening. This is a machine. This is mathematical algorithms. You cannot equate it to a human. And to further add the perspective of an artist, an artist doesn’t look at, like, 100,000 images, and is able to generate hundreds of images within seconds. An artist cannot do that.
“Yes, I have my influences, but it’s not the only thing that goes into my work,” Ortiz continued. “My life, my experiences, my perspective, my technique, all of that goes into the work. Furthermore, something that I feel like a lot of people are missing in these discussions is technical artistry. One of the hardest things you can do, ever, in the arts is to successfully mimic another artist’s style or another person’s work. It’s the hardest thing. I consider myself masterful. I can’t even do it.”
Just how like man, or machine, generative AI models truly are is perhaps a question better put to philosophers than to policymakers. But it’s a question policymakers are unlikely to escape.