Fighting Deep Fakes: IP, or Antitrust? (Updated)

The Federal Trade Commission last week elbowed its way into the increasingly urgent discussion around how to respond to the flood of AI-generated deep fakes plaguing celebrities, politicians, and ordinary citizens. As noted in our previous post, the agency issued a Supplemental Notice of Proposed Rulemaking (SNPRM) seeking comment on whether its recently published rule prohibiting business or government impersonation should be extended to cover the impersonation of individuals as well.

The impersonation rule bars the unauthorized use of government seals or business logos when communicating to consumers by mail or online. It also bans the spoofing of email addresses, such as .gov addresses, or falsely implying an affiliation with a business or government agency.

The new proposal would extend those prohibitions to include any “person, entity, or party, whether real or fictitious, other than those that constitute a business or government” under the current rule. It would also create liability for “providing goods or services with knowledge or reason to know that those goods or services will be used to (a) materially and falsely pose as, directly or by implication, a government entity or officer thereof, a business or officer thereof, or an individual, in or affecting commerce… or (b) materially misrepresent, directly or by implication, affiliation with, including endorsement or sponsorship by, a government entity or officer thereof, a business or officer thereof, or an individual, in or affecting commerce.”

The SNPRM itself discusses the proposal primarily in reference to so-called romance scams and relationship-based scams, particularly those that target older consumers, such as scams that pose as a target’s grandchild to fool them into sending money.

In a news release announcing the SNPRM, however, the commissioners made clear their aim to target celebrity impersonations and AI voice cloning of the sort that has lately targeted recording artists and other voice performers. “The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals,” the statement said. “Emerging technology – including AI-generated deepfakes – threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud.”

Those tools consist largely of the powers spelled out in the Federal Trade Commission Act (15 USC §§41–58), and include among other things the power “to (a) prevent unfair methods of competition and unfair or deceptive acts or practices in or affecting commerce; (b) seek monetary redress and other relief for conduct injurious to consumers; (c) prescribe rules defining with specificity acts or practices that are unfair or deceptive, and establishing requirements designed to prevent such acts or practices.”

The FTC Act was enacted in 1914 as a companion to the Clayton Antitrust Act and rests on Congress’ power under the Constitution to regulate commerce.

The FTC’s latest proposal comes as Congress is considering two bills aimed at combating the same problem, the NO AI FRAUD Act in the House and the NO FAKES Act in the Senate, which take a very different approach. The bills would create a new category of federal intellectual property, alongside copyright, patents and trademarks, providing protection against unauthorized use of an individual’s name, likeness, image and voice.

It would not be the first time the FTC has sought a voice in the intellectual property aspects of the AI-regulation discussion. Last fall, the agency hosted a “roundtable discussion” on the impact of generative AI on the creative industries, and later filed formal comments with the U.S. Copyright Office as part of the latter’s congressionally mandated inquiry into copyright and AI.

That latter move drew criticism from some copyright scholars, notably Berkeley Law professor Pamela Samuelson, the keynote speaker at the recent RightsTech AI Summit. In comments to the Copyright Office, they accused the FTC of straying out of its lane by butting into a copyright debate where it has no legal authority, and of misinterpreting copyright law and the relevant case law.

For all that, the clash does highlight an important and unsettled question at the heart of the debate around deep fakes: Should they be treated primarily as an infringement of an intellectual property right, or should they be regulated under antitrust law as an unfair and deceptive trade practice?

As a practical matter, the antitrust path likely provides the shortest route to concrete action. Unfair and deceptive trade practices are already illegal, and the FTC has clear statutory authority to set rules “defining with specificity acts or practices that are unfair or deceptive.” Those definitions could conceivably include the creation and distribution of deep fakes.

Antitrust enforcement by itself, however, would not produce a legally defined, marketable asset such as a property right that could serve as the foundation for a licensing system of the sort many artists and rights owners hope to see emerge around their likeness, voice and celebrity.

While roughly half the states in the U.S. have some sort of “right of publicity” statute on the books, those are generally grounded in privacy law, court-developed common law, and states’ authority to regulate the sale of goods and services within their borders. They do not confer a tradeable intellectual property right.

Creating such a new property right will likely require new federal legislation. While politicians are themselves among the prime victims of deep fakes, especially in election years, and have an obvious incentive to act quickly, enacting a far-reaching, complex and highly technical measure such as creating a new intellectual property right would be a very heavy legislative lift under the best of circumstances. In the current environment on Capitol Hill, where Congress can barely carry out the most basic responsibilities of governing, it’s almost inconceivable.

The bills that have been introduced thus far are more like position statements than serious legislative proposals, meant to advertise their sponsors’ leadership on the issue (and attract donors) with an eye toward a hoped-for future Congress where it might be possible to actually legislate.

In other words, not soon.

Further, while it may be a matter best left to actual legal and constitutional experts rather than to a rando RightsTech blogger, it seems at least conceivable that questions could be raised as to Congress’ power to create a new intellectual property right in one’s name, image, likeness and voice.

Congress’ power to enact patent and copyright laws is conferred by the so-called intellectual property clause in Article I, §8 of the Constitution. There, Congress is empowered “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

It’s a power grounded in human creativity, in original acts of authorship and invention. An individual’s image and likeness are, by and large, tangible phenomena in the world — facts — not products of creative or original acts of authorship. It might require some creative acts of legislative draftsmanship to fit them under the Constitution’s IP clause.

Alternatively, Congress could explicitly regulate deep fakes under its power to regulate interstate commerce, essentially federalizing and superseding existing state statutes. But again, that approach might not provide the predicate for an orderly market in images and voices.

In the meantime, the tools to create deep fakes get more powerful and easier to use by the day.

Update: On Tuesday, leaders in the U.S. House unveiled a new bipartisan AI task force to try to jumpstart congressional action on AI, including on deep fakes. Republican Speaker Mike Johnson and Democratic Leader Hakeem Jeffries said the task force would be charged with producing a comprehensive report and with considering “guardrails that may be appropriate to safeguard the nation against current and emerging threats.”
