Hollywood movies and TV shows have been brimming with computer-generated images for years, from superhero special effects to entire virtual sets rendered digitally in 3D with Epic Games’ Unreal Engine. So it should have come as no surprise that studios would be keen to render actors digitally once the technology became available. It now has, thanks to recent advances in generative artificial intelligence. An actor’s likeness can be scanned or captured from existing footage, their voice taken from a recording, and fed into an AI engine to insert them into a scene or situation without the human original being needed on set.
The technology could prove to be a vital cost-cutting tool for Hollywood. Having spent billions on a decade-long mergers and acquisitions spree, and sunk billions more into building out their own streaming platforms, the studios are now facing an uncertain period of retrenchment in a radically reshaped post-pandemic marketplace. The pressure from lenders and shareholders to cut costs is intense.
Understandably, actors are not at all happy at the prospect of being digitally replaced, and have now walked off sets and sound stages over the issue (among others).
In contract negotiations with the Screen Actors Guild, the Alliance of Motion Picture and Television Producers (AMPTP), representing the Hollywood studios as well as tech-based streamers like Amazon Prime, Apple TV+ and Netflix, proposed allowing producers to scan the likenesses of background performers and use AI to insert them into scenes for which they were not originally filmed, without needing to pay them for the extra days’ work.
There is some dispute over whether the proposal would allow producers to use the AI likenesses in any situation, in perpetuity, or only within the project in which they initially appeared. But to the actors it’s all the same: if AI lets producers scan and reuse an anonymous background extra, it can do the same with an A-list star.
Yet, even if a deal with striking actors (and writers) can be reached, it won’t necessarily resolve all potential questions around the use of AI images in movies and TV shows.
Generative AI technology works differently from traditional CGI tools. In the latter, a human VFX artist operates the tools directly, adding color, texture, perspective and other aesthetic elements in addition to matching the results up with the principal photography. With AI, a computer does most of the heavy lifting. A human can enter prompts or instructions, but cannot directly manipulate the values of individual elements. The actual computational process within the system is opaque, and the output in any given iteration of an image has an inherent degree of unpredictability.
That unpredictability has led the U.S. Copyright Office (USCO) to deny formal copyright protection to images generated by AI, even when they appear within a work that is otherwise copyrightable. In February, for instance, after previously accepting registration for a graphic novel titled “Zarya of the Dawn” submitted by author Kristina Kashtanova, the USCO reversed itself and partially rescinded the registration after learning the illustrations in the work had been generated by AI and therefore lacked sufficient human authorship to be eligible for copyright protection.
The following month, the USCO issued updated guidance on registering works to clarify its position. “If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” the guidance said. “Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”
Kashtanova protested the partial rescission of the “Zarya” registration, arguing that she put the images through multiple iterations, tweaking the prompts she gave the AI and selecting the final versions. But it was to no avail, at least so far.
Going forward, the USCO’s guidance said, “In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form.’ The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.”
The “Zarya of the Dawn” ruling and the updated guidance, should they stand, could raise a host of practical challenges for artists and rights owners, including, potentially, filmmakers. If a film is found to contain both copyrighted elements and uncopyrightable AI elements, could anyone take those uncopyrighted elements and use them elsewhere? How would such composite works fit into prevailing film licensing models? Will uncertainty around AI images lead studios to limit their use of the technology?
If dialogue is in the script but comes out of an AI-generated image on screen, would the image be unprotected even as the dialogue, as part of the copyrighted script, remains protected? Far from theoretical, the issue has already come up in the context of audiobooks that rely on AI-generated voice narration. While the underlying composition may be protected by copyright, the sound recording is not considered protectable because of the lack of human input.
The industry, as well as Congress, the courts, the USCO and international copyright authorities, is just beginning to work through those issues. While none are insoluble in principle, it will take time to get all parties on the same page. For the studios to rush into relying on AI-generated images in films and television shows as a short-term cost-cutting measure could be a recipe for long-term costs in the form of legal uncertainty.