Sony Music Group last week began sending legal letters to roughly 700 AI developers and music streaming services demanding detailed information on whether, how and by what means the recipients or their affiliates have used SMG-owned content “to train, develop or commercialize any of your AI systems,” and a “description of the manner in which such SMG Content was accessed and/or reproduced and/or extracted by you or your affiliates or any third party contracted to do so on your behalf.”
The text of the letters has not yet been released publicly. Sony separately posted a “Declaration of AI Training Opt-Out” on its website.
Sony Music Publishing (SMP) and Sony Music Entertainment (SME), on behalf of themselves and their wholly owned or controlled affiliates, are making this affirmative, public declaration confirming that, except as specifically and explicitly authorized by either SME or SMP, as the case may be, each of them expressly prohibits and opts out of any text or data mining, web scraping or similar reproductions, extractions or uses (“TDM”) of any SME and/or SMP content (including, without limitation, musical compositions, lyrics, audio recordings, audiovisual recordings, artwork, images, data, etc.) for any purposes, including in relation to training, developing or commercializing any AI system, and by any means, including by bots, scrapers or other automated processes, in each case to the full extent permitted by applicable law in all relevant jurisdictions.
The full list of Sony demands is almost breathtaking in its sweep, running to nine pages, according to multiple reports. The letter gives recipients until the end of this week to respond.
The move looks like a prelude to further action, presumably litigation against one or more of the recipients.
“Due to the nature of your operations and published information about your AI systems, we have reason to believe that you and/or your affiliates may already have made unauthorized uses (including TDM [text and data mining]) of SMG Content in relation to the training, development or commercialization of AI systems,” the letter reads at one point. “This letter serves to put you on notice directly, and reiterate, that [Sony’s labels] expressly prohibit any use of [their] content.”
As of this writing, none of the recipients has responded publicly or acknowledged receiving the letter. If and when any do respond, however, some are likely to raise questions as to what legal authorities Sony believes it is acting under in making its demands.
The letter references the transparency requirements in the European Union’s AI Act, including providing public summaries of copyrighted works used in training. The Act is not yet in force, however, so AI developers would seem to have no immediate obligation to comply.
At least some of the letter’s recipients, moreover, such as music streaming services, are not clearly covered by those provisions. Nor, likely, are some of the targeted AI developers, whose models may not fall into the category facing the most stringent disclosure requirements under the law.
Further, many of the apparent recipients, including OpenAI, Microsoft and Google, are based in the U.S., where the question of whether the use of copyrighted works in AI training is permissible under the fair use doctrine has yet to be resolved.
The overall impression from the letter-writing campaign is that it is meant to put the AI world on notice that Sony intends to escalate the current legal battle between rights owners and AI companies over the use of copyrighted material in training.
At one point, the letter claims that any use of “automated analytical techniques aimed at analyzing text and data in digital form to generate information, including patterns, trends and correlations,” is prohibited.
That essentially describes the basic operation of generative AI systems, and Sony appears to be declaring it per se copyright infringement, a position no court has yet endorsed.
It’s an indication that Sony Music feels it has the legal and legislative wind at its back and that now is the time to strike. It may be right.
The transparency requirements in the EU AI Act will take effect in a few months, legislation mandating similar transparency has been introduced in the U.S., and AI-created deepfakes are under legislative and regulatory assault everywhere. At the same time, AI developers are running out of high-quality data on which to train new models, handing leverage to owners of quality content catalogs. Many leading AI companies are also facing internal turmoil or financial challenges, potentially leaving them more vulnerable to external pressures.
We may be entering a new phase in the AI copyright wars.