What’s In a Name? Seeking an Answer to Deep Fakes

When it comes to AI and intellectual property, most of the focus has been on the use of copyrighted works in training generative AI models and the patent and copyright eligibility of inventions or works produced with the technology. Insofar as the political deal European Union officials reached over the weekend on the AI Act addresses IP, it confines itself to requiring foundation-model developers to document and disclose their training data, and requiring the labeling of AI-generated content. Training and IP eligibility have also been the main focus of AI litigation to date in the U.S.

But the rapid spread and increasing ease of use of so-called deep fake apps have led to growing calls for protection against the unauthorized appropriation of a person’s name, image and likeness (NIL) or celebrity. The calls run like a secondary theme through comments filed with the Copyright Office in its current study of AI and copyright (see here, here and here), and the issue played a starring role in the labor strife that recently rocked Hollywood.

The issue is also the subject of draft legislation introduced in October by a bipartisan group of Senators, including the chair and ranking member of the intellectual property subcommittee of the Judiciary Committee. Thirty-six states currently provide some form of publicity right, but the details vary from jurisdiction to jurisdiction. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act would create a uniform federal right of publicity.

It would also significantly expand that right. Whereas state-level protections are generally available only to persons whose name or image has some commercial value, the NO FAKES Act as currently drafted conspicuously omits any mention of commercial value, whether of the victim’s name and image or of the use to which the misappropriation is put. Instead, it would simply prohibit anyone from creating a digital replica or appropriating anyone else’s identity without their consent, regardless of any commercial consideration.

In another major break from state publicity laws, the federal bill would not limit damages to the commercial gain or loss of either the victim or the defendant. Instead, it would provide statutory damages of up to $5,000 per violation, including for each instance of knowingly distributing a digital fake.

The provision of statutory damages would put the federal publicity right on par with copyright. While the $5,000-per-violation limit is a fraction of the $150,000 per work available for willful copyright infringement, the threat of statutory damages, particularly at scale, would give potential victims of deep fakes a powerful hammer to wield against violators and could serve as a strong disincentive to engage in deep fakes.

Critically, the draft bill would also create potential secondary liability for developers whose models were used in applications that produce deep fakes, even if the developers were not themselves directly involved in their production or distribution. The text stipulates that it is not a defense to a claim to argue that the accused developer “did not participate in the creation, development, distribution, or dissemination of the applicable digital replica.”

For now, the NO FAKES Act is merely a draft. There has been no movement on the bill since it was introduced, and with an election year starting in a matter of weeks, little if any non-essential or non-vote-stoking legislation is likely to move until the next Congress. Many of the provisions in the current draft, moreover, will face fierce opposition from AI developers and their allies on Capitol Hill, promising a long and bruising fight if the bill ever does receive serious consideration.

The draft also raises nearly as many questions as it seeks to answer. Would invoking an artist’s or author’s name in a chatbot prompt to produce something in a similar style be a violation, even if prompting a human to do so would not be? Would the artist’s or author’s “style” itself be protected, and if so, what elements would define or distinguish their style?

Creating what could amount to a new federal intellectual property regime, in other words, would be a very heavy legislative lift.

Unauthorized, AI-powered deep fakes have quickly become the bane of celebrities and politicians, who are among their most frequent targets. But the same technology also holds the tantalizing promise of a lucrative new licensing business for artists and performers and their estates, as hinted at by the recent authorized use of Jimmy Stewart’s voice to read a bedtime story for the sleep and meditation app Calm.

If something like the NO FAKES Act were to become law, it could lead to a flood of new litigation between artists and performers on the one hand, and AI technology developers on the other. Yet it could also turbo-charge conversations already underway between rights owners and developers looking to leverage AI to build new products and creation tools. A federal standard could also set clear parameters for future negotiations between producers and creative workers over the use of AI.

In the meantime, however, the technology to conjure and manipulate sounds and images will only grow more powerful.
