Finetuning AI Copyright Infringement Claims

Stop me if you’ve heard this one, but a group of authors has filed a prospective class action lawsuit against the developer of a generative AI model alleging copyright infringement. Filed Friday (March 8) in the Northern District of California, the suit targets Nvidia, the chipmaker whose GPUs are widely used in data centers to handle the massive computing work required to train and manage generative AI models, but which also provides its own Large Language Models as part of its NeMo Megatron AI development tool kit.

The complaint names three plaintiffs, authors Abdi Nazemian, Brian Keene and Stewart O’Nan, but seeks money damages on behalf of “All persons or entities domiciled in the United States that own a United States copyright in any work” used in training the Nvidia LLM, known as NeMo Megatron.

Anything That Can Be Made With AI Will Be, In Hollywood

At the risk of belaboring the obvious, generative AI is now everywhere in the media and rights-based industries. It’s writing news articles and fan-fic e-books, it’s making music, it’s creating artwork. But no creative industry will be transformed by AI quite as much as movie and television production. The reason has as much to do with economics as technology.

Warner Bros.’ “Dune: Part Two” opened to a whopping $81.5 million domestically over the weekend, and $97 million internationally. It brought a welcome boost to theaters, which had seen the number of butts in seats come crashing down from the summer’s “Barbenheimer” high. And it showed that big-budget, effects-driven spectacles can still deliver for a studio, especially if they’re spectacular enough to justify release on large-format screens, like IMAX, which carry a premium ticket price and accounted for 48% of “Dune’s” domestic tally.

AI and the News: Deal, or No Deal?

Reddit, the self-anointed “front page of the internet,” sits atop a huge archive of original content. It contains more than a billion posts created by its 73 million average daily unique users, self-organized into more than 100,000 interest-based communities, or subreddits, ranging from sports to politics, technology, pets, movies, music & TV, health & nutrition, business, philosophy and home & garden. You name it, there’s likely to be a subreddit for it.

The scale and diversity of the Reddit archive, replete with uncounted links to all corners of the World Wide Web and made freely accessible via API, has long been a highly valued resource for researchers, academics and developers building third-party applications for accessing Reddit communities. More recently, it has also been eagerly mined by developers of generative AI tools in need of large troves of natural language texts on which to train their models.

Fighting Deep Fakes: IP, or Antitrust? (Updated)

The Federal Trade Commission last week elbowed its way into the increasingly urgent discussion around how to respond to the flood of AI-generated deep fakes plaguing celebrities, politicians, and ordinary citizens. As noted in our previous post, the agency issued a Supplemental Notice of Proposed Rulemaking (SNPRM) seeking comment on whether its recently published rule prohibiting business or government impersonation should be extended to cover the impersonation of individuals as well.

The impersonation rule bars the unauthorized use of government seals or business logos when communicating to consumers by mail or online. It also bans spoofing email addresses, such as .gov addresses, and falsely implying an affiliation with a business or government agency.

Suddenly, Everyone is Adding Watermarks to AI Generated Media

With election season in full swing in the U.S. and European Union, and concern growing over deep-fake and AI-manipulated images and video targeting politicians as well as celebrities, AI heavyweights are starting to come around to supporting industry initiatives to develop and adopt technical standards for identifying AI-produced content.

At last month’s World Economic Forum in Davos, Meta President of Global Affairs Nick Clegg called efforts to identify and detect AI content “the most urgent task” facing the industry. Late last year, the Facebook and Instagram parent began requiring political advertisers using its platforms to disclose whether they used AI tools to create their posts. But it has also now gotten behind the technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA) for certifying the source and history of digital content.

Copyright and AI: Where’s the Harm?

Berkeley law professor Pamela Samuelson has ruffled more than a few feathers among creators and rights owners over the years. In her role as co-founder and chair of the Authors Alliance, her seats on the boards of the Electronic Frontier Foundation and Public Knowledge, and in spearheading the American Law Institute’s controversial restatement of copyright law, she has been a high-profile and vocal skeptic of expansive views of copyright protections, particularly in the realm of digital platforms and technologies.

News Value: Is AI On the Money?

Facing a potentially ruinous lawsuit from the New York Times over the unlicensed use of the newspaper’s reporting to train its GPT Large Language Model, OpenAI is putting out the word that it is not opposed to paying publishers for access to their content, as it recently did with Axel Springer.

“We are in the middle of many negotiations and discussions with many publishers. They are active. They are very positive,” Tom Rubin, OpenAI’s chief of intellectual property and content, told Bloomberg News. “You’ve seen deals announced, and there will be more in the future.”

All the News That’s Fit to Scrape

If you’re reading this post you likely know by now that the New York Times last week filed a massive copyright infringement lawsuit against OpenAI and Microsoft over the unlicensed use of Times content to train the GPT line of generative AI foundation models.

It’s tempting to view this as the Big One, the Battle of the Titans that will make it all the way to the Supreme Court for a definitive resolution of the most contentious question in the realm of AI and copyright. It’s the New York Times, after all, one of the premier names in journalism anywhere in the world, and one of the few publishers with the resources to take on the tech giants and pursue the case to the end.

Revealing Sources: The News on AI

For news publishers, AI can giveth, and AI can taketh away. On the latter side of the ledger, publishers are in a cold sweat over Google’s “Search Generative Experience” (SGE) product, which the search giant has been testing for the past several months. The tool, trained in part on publishers’ content, uses AI to generate detailed responses to users’ search queries, rather than merely providing links to websites where answers might be found.

Last week, the Arkansas-based publisher Helena World Chronicle filed a prospective class-action lawsuit against Google, accusing the search giant of anti-competitive practices and specifically citing Search Generative Experience.

What’s In a Name? Seeking An Answer to Deep Fakes

When it comes to AI and intellectual property, most of the focus has been on the use of copyrighted works in training generative AI models and the patent and copyright eligibility of inventions or works produced with the technology. Insofar as the political deal European Union officials reached over the weekend on the AI Act addresses IP, it confines itself to requiring foundation-model developers to document and disclose their training data and the labeling of AI-generated content. Training and IP eligibility have also been the main focus of AI litigation to date in the U.S.

But the rapid spread and growing ease of so-called deep fake apps have led to growing calls to provide protection against the unauthorized appropriation of a person’s name, image and likeness (NIL) or celebrity. The calls run like a secondary theme through comments filed with the Copyright Office in its current study of AI and copyright (see here, here and here), and the issue played a starring role in the labor strife that recently rocked Hollywood.
