Can Regulation Deep Six Deepfakes?

The National Institute of Standards and Technology (NIST), a basic science and research arm of the Commerce Department best known, if at all, for tackling knotty challenges like accurately centering quantum dots in photonic chips and developing standard reference materials for measuring the contents of human poop used in medical research and treatments, last week took up the problem of identifying AI-generated and manipulated audio, video, images and text.

Tasked by President Biden’s Executive Order on AI with helping to improve the safety, security and trustworthiness of AI systems, NIST has issued a GenAI Challenge inviting teams of researchers from academia, industry and other research labs to participate in a series of challenges intended to evaluate systems and methods of identifying synthetic content.

Researchers can participate either as “generators,” tasked with producing the synthetic content “that can deceive the best algorithmic discriminators as well as humans,” or as “discriminators,” who will be tested “on their system’s ability to detect synthetic content created by large language models (LLMs) and generative AI models.”

The first round of challenges, which kicked off May 1st, addresses text-to-text AI generators. The second round will focus on text-to-image generators, with audio, video and code to follow.

While identifying the best approaches to detecting AI-created content after the fact won't by itself solve the problem of deepfakes, the process could give substance to future laws or regulations mandating the use of such methods by consumer-facing platforms.

The NIST challenge is the latest instance of a broad, if not quite coordinated, series of initiatives by regulators and legislatures in the U.S. and elsewhere to tackle the problem of deepfakes and the related issue of AI-enabled disinformation.

That distinguishes the deepfake issue from other controversies that have arisen around generative AI, such as over the use of copyrighted works to train AI models, which so far have played out mostly in courts, through litigation.

Other examples include:

  • The Federal Trade Commission closed the comment period at the end of April for its proposal to extend its recently enacted ban on impersonating businesses or government agencies to cover the AI-enabled impersonation of individuals as well. “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” FTC Chair Lina M. Khan said in announcing the proposal. “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
  • The NO FAKES Act in the U.S. Senate would create what would amount to a new class of federal intellectual property in an individual’s image, likeness and voice that would be transferrable by means of a license and assignable by contract, much as a copyright can be. At a recent Senate IP subcommittee hearing on the bill, the measure received an unusual (for these days) degree of bipartisan support, and the senators shared a sense of urgency around it. Motion Picture Association associate general counsel Ben Sheffner and SAG-AFTRA executive director Duncan Crabtree-Ireland, whose organizations were recently at odds over the use of AI in filmmaking, both testified in support of the bill, albeit with some differences on the particulars.
  • A group of U.K. lawmakers from across the political spectrum last week issued a report calling for laws to protect artists’ personalities from being copied by AI without permission and for mandatory labelling of AI-generated content.
  • Even the U.S. Copyright Office, which is expected shortly to begin issuing a series of reports based on its year-long study of AI, intends the first installment to focus on deepfakes and make recommendations to Congress about possible legislative steps to address them.

Industry has also been active in pursuing technical solutions to deepfakes and related issues. The Adobe-led C2PA initiative recently announced broadening adoption of its Content Credentials to label and identify authentic content, helping to distinguish it from AI-manipulated media, which would lack the credentials. France’s Ircam Amplify recently introduced a tool for recognizing AI-created content that it claims is accurate as much as 98.5% of the time.

Part of the difference in how the copyright-related controversies and the problem of deepfakes are playing out — litigation vs. regulation — comes down to the nature of the two problems.

The legal questions at issue in the copyright controversies related to generative AI, for all the Sturm und Drang around them, are relatively straightforward. The use of copyrighted works to train generative AI models either does or does not fall within the outer perimeter of fair use, and courts will have ample statutory and case law to draw on in deciding the question. In either case, the way forward is clear: training on copyrighted works will either need to be licensed, or it won’t, with all that either outcome entails.

In contrast, there’s little case law that’s on all fours with deepfakes. It’s not even clear that they’re illegal under current law or that they infringe anyone’s statutory or common-law rights. State-level right-of-publicity statutes may apply in some cases, and commercial exploitation of deepfakes may violate certain antitrust or unfair trade practice provisions. But those are both second-order phenomena that do not flow directly from the creation of deepfakes themselves.

The problem simply does not lend itself easily to litigation. A lot is hanging on whether regulators and legislatures can or will address it any more effectively than the courts.
