Everyone Agrees on the Need to Do Something about Deepfakes, Just Not How to Do It

The BSA Software Alliance, one of the heavy-hitting tech industry associations in Washington, counts a number of major AI companies among its members, including Adobe and Microsoft. But this week it plans to release a policy statement urging Congress to “take steps” to protect artists from the spread of unauthorized, AI-generated deepfakes. The statement, provided to RightsTech in advance of release, lists eight key principles, including creating a new right for artists to authorize or prevent the commercial dissemination of digital replicas of their name, image, likeness or voice, and prohibiting commercial trafficking in any algorithm, software, technology or service that has the “primary purpose” of creating or disseminating such replicas “knowing that this act was unauthorized.”

It also urges Congress to consider additional protections for graphic artists and writers against AI-generated “forgeries.”

The BSA would appear to have several goals in mind in releasing the statement this week. Sen. Chris Coons (D-Del.), one of the authors of the No Fakes Act, is expected to release an updated version of the bill, also this week, one that takes a harder line against AI replicas, including a ban on their production, not just their commercial distribution.

The BSA proposal seems aimed at carving out some safe harbors for AI companies modeled on the copyright safe harbors in the Digital Millennium Copyright Act. For instance, the BSA proposal recommends creating “incentives” for the quick removal of deepfakes from online platforms “consistent with the requirements of 17 USC § 512.” That’s a reference to the notice-and-takedown procedures outlined in the DMCA, presumably including the exemption from liability for platforms if they respond in a timely manner.

BSA also seeks to shield from liability any use of a digital replica “for statutorily permitted purposes such as criticism, comment, news reporting, teaching, scholarship, or research (in line with 17 USC § 107)” — that is, covered by fair use.

Another goal of the BSA statement seems to be to head off the emergence of a patchwork of potentially tougher state regulations, such as Tennessee’s ELVIS Act and bills being considered in New York and California, through preemptive federal legislation.

How much traction the BSA statement will get against the growing momentum behind tougher action on deepfakes is unclear. What is clear from the statement is that the legislative and regulatory battle over how best to protect artists and others from misappropriation of their identity by AI is just getting warmed up. While there is broad consensus on the need to do something about the unchecked spread of deepfakes, that consensus is already breaking down over exactly what to do, and how.

The BSA is hardly the only industry association to express concern over the adoption of what it sees as overly restrictive laws and regulations on the use of AI. As noted here in previous posts, the Motion Picture Association in particular is concerned that heavy-handed regulation of AI use, including in the name of artists, could hinder the Hollywood studios’ plans to embrace the technology for use in film and television production.

Given how rapidly the “quality” of deepfakes is improving, the need for action is urgent. But that doesn’t mean it will happen quickly.
