
Copyright and AI: Where’s the Harm?

Berkeley law professor Pamela Samuelson has ruffled more than a few feathers among creators and rights owners over the years. In her role as co-founder and chair of the Authors Alliance, her seats on the boards of the Electronic Frontier Foundation and Public Knowledge, and her spearheading of the American Law Institute’s controversial restatement of copyright law, she has been a high-profile and vocal skeptic of expansive views of copyright protections, particularly in the realm of digital platforms and technologies.

News Value: Is AI On the Money?

Facing a potentially ruinous lawsuit from the New York Times over the unlicensed use of the newspaper’s reporting to train its GPT large language models, OpenAI is putting out the word that it is not opposed to paying publishers for access to their content, as it recently did with Axel Springer.

“We are in the middle of many negotiations and discussions with many publishers. They are active. They are very positive,” Tom Rubin, OpenAI’s chief of intellectual property and content, told Bloomberg News. “You’ve seen deals announced, and there will be more in the future.”

All the News That’s Fit to Scrape

If you’re reading this post you likely know by now that the New York Times last week filed a massive copyright infringement lawsuit against OpenAI and Microsoft over the unlicensed use of Times content to train the GPT line of generative AI foundation models.

It’s tempting to view this as the Big One, the Battle of the Titans that will make it all the way to the Supreme Court for a definitive resolution of the most contentious question in the realm of AI and copyright. It’s the New York Times, after all, one of the premier names in journalism anywhere in the world, and one of the few publishers with the resources to take on the tech giants and pursue the case to the end.

Revealing Sources: The News on AI

For news publishers, AI can giveth, and AI can taketh away. On the latter side of the ledger, publishers are in a cold sweat over Google’s “Search Generative Experience” (SGE) product, which the search giant has been testing for the past several months. The tool, trained in part on publishers’ content, uses AI to generate full, detailed responses to users’ search queries, rather than merely providing links to websites where answers might be found.

Last week, the Arkansas-based publisher Helena World Chronicle filed a prospective class-action lawsuit against Google, accusing the search giant of anti-competitive practices and specifically citing Search Generative Experience.

What’s In a Name? Seeking An Answer to Deep Fakes

When it comes to AI and intellectual property, most of the focus has been on the use of copyrighted works in training generative AI models and on the patent and copyright eligibility of inventions or works produced with the technology. Insofar as the political deal European Union officials reached over the weekend on the AI Act addresses IP, it confines itself to requiring foundation-model developers to document and disclose their training data and to label AI-generated content. Training and IP eligibility have also been the main focus of AI litigation to date in the U.S.

But the rapid spread and growing ease of so-called deep fake apps have led to growing calls to provide protection against the unauthorized appropriation of a person’s name, image and likeness (NIL) or celebrity. The calls run like a secondary theme through comments filed with the Copyright Office in its current study of AI and copyright (see here, here and here), and the issue played a starring role in the labor strife that recently rocked Hollywood.

EU AI Act: Down to the Wire (Update)

Negotiations toward a final text of the European Union’s AI Act are going down to the wire this week as the final “trilogue” session among the EU Parliament, Commission and Council is scheduled for Wednesday (Dec. 6). The pressure is on to reach an agreement before the end of the year, as the June 2024 EU Parliamentary elections loom over talks. If agreement can’t be reached before then, there’s a danger that the process would have to be restarted with a new Parliament and new leadership in the Council, which could potentially scuttle the whole project.

Yet despite the pressure, the parties to the current talks appear to be farther apart than when they started, endangering what had been touted as the world’s first comprehensive regulatory regime for AI. The consensus on the basic structure of the proposed regulations that seemed at hand in the summer was thrown into turmoil last month when France, supported by Germany and Italy, suddenly reversed its position and embraced “mandatory self-regulation” via codes of conduct for the largest foundation models instead of the once-agreed tiered system of binding obligations.

The Future of Generative AI Might Be Smaller Than You Think

The distinguishing characteristic of large language models (LLMs) is, as the name implies, their sheer size. Meta’s LLaMA-2 and OpenAI’s GPT-4 each comprise well over 100 billion parameters — the individual weights and variables they derive from their training data and use to process prompt inputs. Scale is also the defining characteristic of the training process LLMs undergo. The datasets they ingest are almost incomprehensibly large — equivalent to much of the World Wide Web — and require immense amounts of computing capacity and energy to analyze.
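To make that scale concrete, a rough back-of-the-envelope calculation shows what a 100-billion-parameter model implies just in storage. The figures below are illustrative assumptions, not published specifications for any particular model:

```python
def model_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Estimate raw weight-storage needs for a model.

    bytes_per_param=2 assumes 16-bit (fp16/bf16) weights, a common
    choice for serving; 4 would correspond to full fp32 precision.
    """
    return num_params * bytes_per_param / 1e9


# A hypothetical 100-billion-parameter model stored in 16-bit precision
# needs roughly 200 GB for the weights alone, before any memory for
# activations, optimizer state, or the training data itself.
print(model_memory_gb(100_000_000_000))  # 200.0 (GB)
```

Training multiplies these numbers further, since gradients and optimizer state typically add several more copies of the weights in memory — one reason the article's point about computing capacity and energy follows directly from parameter count.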

Artificial Intelligence, Real Turmoil (Updated 2X)

Lenin famously said, “There are decades where nothing happens; and weeks where decades happen.” Last week offered a fair approximation of the latter in the world of AI. On Monday, Ed Newton-Rex, the widely respected head of StabilityAI’s audio team, abruptly resigned over a disagreement with the company’s position on the use of copyrighted works to train generative AI models. On Friday, OpenAI even more abruptly sacked its high-profile co-founder and CEO Sam Altman and set a new standard for mangling internal communication and investor relations. And somewhere along the way, Meta reportedly dissolved its Responsible AI team and reassigned its staffers to other units.

The turmoil continued through the weekend. OpenAI president Greg Brockman quit in solidarity with Altman, employees threatened mass resignations, and the company’s blindsided investors demanded Altman’s reinstatement while lining up to finance any new AI venture he and Brockman might launch. By Saturday evening, talks reportedly were underway between Altman and the company to bring him and his team back into the fold.

FTC Not Waiting for Congress or Courts To Take On Generative AI

If you’re wondering where a robust regulatory response to generative AI could come from, don’t sleep on the Federal Trade Commission. While courts and the U.S. Copyright Office are still feeling their way through the complex maze of technical, legal and policy questions raised by the generative AI technology, the FTC, as a law enforcement agency, has a narrower brief and potent tools it already can bring to bear on many of those questions.

It also has a green light from the White House to use those tools. In President Joe Biden’s recent executive order on AI the FTC was encouraged to take a leading role in the government’s efforts to put guardrails around the development and use of AI systems.

Another AI Lawsuit, Another Dismissal

For the second time in a week, a federal district court judge in California has sent the plaintiffs in an AI copyright infringement lawsuit back to the drawing board, dismissing most of the claims brought against Facebook-parent Meta over its Llama generative AI tool while granting them leave to amend and refile their complaint.

The ruling partially granting Meta’s motion to dismiss came down on Friday, one week after another federal judge dismissed most of the copyright infringement charges brought against StabilityAI and other AI image generators while granting leave for plaintiffs to amend and refile their complaint.
