
SCOTUS: No Statute of Limitations on Copyright Damages

The Copyright Act of 1976 stipulates that a plaintiff must file an infringement suit “within three years after the claim accrued.” For many years, courts have disagreed over when that three-year clock actually starts ticking.

Some courts, including some Circuit Courts of Appeals, have held that it starts running when the alleged infringement occurred, and that suits filed three or more years after that time are not valid. Other courts have held that the statute of limitations only starts running when the plaintiff discovers, or reasonably could have discovered, the infringement, even if it occurred more than three years prior — an interpretation known as the “discovery rule.”

Can Regulation Deep Six Deepfakes?

The National Institute of Standards and Technology (NIST), a basic science and research arm of the Commerce Department best known, if at all, for tackling knotty challenges like accurately centering quantum dots in photonic chips and developing standard reference materials for measuring the contents of human poop used in medical research and treatments, last week took up the problem of identifying AI-generated and manipulated audio, video, images and text.

Tasked by President Biden’s Executive Order on AI with helping to improve the safety, security and trustworthiness of AI systems, NIST has issued a GenAI Challenge inviting teams of researchers from academia, industry and other research labs to participate in a series of challenges intended to evaluate systems and methods for identifying synthetic content.

The TikTok Follies

Congress has passed, and President Biden has now signed, a bill requiring ByteDance to sell TikTok to an American buyer or American-controlled company within 270 days (possibly extendable to a year), or face having the app banned from the U.S.

Things are not likely to work out quite as neatly as that forced choice would have it.

TikTok CEO Shou Zi Chew issued a defiant statement in response to the bill’s passage proclaiming “we aren’t going anywhere,” and vowing to challenge the law in court. “We are confident, and we will keep fighting for your rights in the courts,” he said. “The facts and the Constitution are on our side, and we expect to prevail again.”

AI Companies Get the Picture

One of the biggest challenges facing copyright owners in grappling with the rapid development of generative AI technology, apart from its murky legal status, has been market failure, as discussed here in previous posts. The amount of existing material needed to train gen-AI models is so great, and so varied, that gauging the value of any one piece of it to establish a market price for licensing purposes is often effectively impossible.

One group of rights owners is finding willing buyers among AI companies, however. An active, albeit for various reasons mostly sotto voce, market has begun to emerge for the use of images held in large photo archives and by photo agencies and social media platforms, complete with per-unit pricing and at least a nod toward creator attribution.

Spotlight On Data Transparency

We already knew that OpenAI, Google and Meta relied on copyrighted material to train their generative AI models. The companies themselves have acknowledged as much by raising a fair use defense in the myriad lawsuits brought against them by copyright owners, including the New York Times Co.’s copyright infringement lawsuit against OpenAI and Microsoft.

We also know that AI developers are increasingly desperate for new sources of high-quality data to train on as they rapidly exhaust the published contents of the World Wide Web, and are pushing the envelope in the pursuit of untapped resources.

Fixing AI’s Market Failure

Large Language Models (LLMs) require large amounts of data for training. Very large. Like the entire textual content of the World Wide Web large. In the case of the largest such models — OpenAI’s GPT, Google’s Gemini, Meta’s LLaMA, France’s Mistral — most of the data used is simply vacuumed up from the internet, if not by the companies themselves then by third-party bot-jockeys like Common Crawl, which provides structured subsets of the data suitable for AI training. Other tranches come from digitized archives like the Books 1, 2 and 3 collections and Z-Library.

In nearly all cases, the hoovering and archive-compiling has been done without the permission or even the knowledge of the creators or rights owners of the vacuumed-up haul.

Dancing With the AI Devil

Fresh off scaring the bejeezus out of many in Hollywood with demos of its text-to-video generator Sora, OpenAI now wants in. According to Bloomberg, top executives at the generative AI developer will hold a round of meetings this week with a number of film studios and Hollywood honchos to discuss what Sora can do for them.

We’ve discussed here before why everything that can be made with AI will be, in Hollywood. So it is no great surprise that studio folks would take the meetings. But appearing to get cozy with Sora right now carries significant risk for the studios.

Generative AI was recently at the center of extensive labor unrest in Hollywood that cost the studios the better part of a year’s worth of production. As a result of that unrest, they are also now bound by collective bargaining agreements with writers and actors that circumscribe what they can do unilaterally with tools like Sora.

Will the Price Be Right for AI Training Rights?

We’ve said it before, and now we can say it again: Don’t sleep on the Federal Trade Commission when it comes to a regulatory response to the rise of generative AI. On Friday, Reddit filed an amended S-1 registration statement for its planned IPO in which it disclosed that the FTC has begun investigating its data licensing program for AI training.

“[O]n March 14, 2024, we received a letter from the FTC advising us that the FTC’s staff is conducting a non-public inquiry focused on our sale, licensing, or sharing of user-generated content with third parties to train AI models,” the amended S-1 said. “Given the novel nature of these technologies and commercial arrangements, we are not surprised that the FTC has expressed interest in this area. We do not believe that we have engaged in any unfair or deceptive trade practice.”

Finetuning AI Copyright Infringement Claims

Stop me if you’ve heard this one, but a group of authors has filed a prospective class action lawsuit against the developer of a generative AI model alleging copyright infringement. Filed Friday (March 8) in the Northern District of California, the suit targets Nvidia, the chipmaker whose GPUs are widely used in data centers to handle the massive computing work required to train and run generative AI models, but which also provides its own Large Language Models as part of its NeMo Megatron AI development tool kit.

The complaint names three plaintiffs, authors Abdi Nazemian, Brian Keene and Stewart O’Nan, but seeks money damages on behalf of “All persons or entities domiciled in the United States that own a United States copyright in any work” used in training the Nvidia LLM, known as NeMo Megatron.

Anything That Can Be Made With AI Will Be, In Hollywood

At the risk of belaboring the obvious, generative AI is now everywhere in the media and rights-based industries. It’s writing news articles and fan-fic e-books, it’s making music, it’s creating artwork. But no creative industry will be transformed by AI quite as much as movie and television production. The reason has as much to do with economics as technology.

Warner Bros.’ “Dune: Part Two” opened to a whopping $81.5 million domestically over the weekend, and $97 million internationally. It brought a welcome boost to theaters, which had seen the number of butts in seats come crashing down from the summer’s “Barbenheimer” high. And it showed that big-budget, effects-driven spectacles can still deliver for a studio, especially if they’re spectacular enough to justify release on large-format screens, like IMAX, which carry a premium ticket price and accounted for 48% of “Dune’s” domestic tally.
