Extra

New ISO Standard Points to AI Opt-Out

Last month, the International Organization for Standardization (ISO) gave final approval to a new, open technical standard for a machine-readable content identifier that could provide creators and rights owners with a powerful new tool to regulate the use of their works in a variety of contexts.

Unlike other product or work identifiers, such as the music industry’s ISRC and ISWC standards, which are typically assigned to a work or file by an outside authority or industry body, the new International Standard Content Code (ISCC) is algorithmically derived from the media file itself and can be used for any type of digital media content, from text to music to images.
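To make the contrast concrete, here is a minimal, hypothetical sketch in Python of what a content-derived identifier looks like. It is not the published ISCC algorithm, which composes separate Meta-, Content-, Data- and Instance-Code units and uses similarity-preserving hashes so that near-duplicate files yield similar codes; it only illustrates the basic idea that the identifier is computed from the media and its metadata, so anyone holding the same file can regenerate it without asking a registry.

```python
# Toy illustration of a content-derived identifier -- NOT the published ISCC
# algorithm. The point: the code is computed from the media itself, so it can
# be regenerated by anyone with the file, rather than assigned by a registry.
import hashlib

def toy_content_code(media_bytes: bytes, title: str = "") -> str:
    # Instance-level component: exact fingerprint of the file's raw bytes
    instance = hashlib.sha256(media_bytes).digest()[:8]
    # Meta-level component: fingerprint of normalized descriptive metadata
    normalized = " ".join(title.lower().split())
    meta = hashlib.blake2b(normalized.encode("utf-8"), digest_size=8).digest()
    return "TOY-" + (meta + instance).hex().upper()

# The same bytes plus the same metadata always yield the same identifier.
sample = b"\xff\xfb" + b"fake mp3 frame data" * 100  # stand-in for a media file
print(toy_content_code(sample, "Artist Name - Track Title"))
```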

Don’t Count Out TikTok

For a company facing a legal death sentence, TikTok sure isn’t acting like it plans to go down anytime soon. Either that, or it’s hedging its bets.

This week, the ByteDance-owned social media platform and eight named users filed their expected formal lawsuit against the U.S. government, challenging the law passed by Congress and signed by President Biden earlier this year that requires TikTok to be sold to a non-Chinese-owned company by January 19, 2025, or face permanent banishment from the United States.

Bundling Up

Reps. Ted Lieu and Adam Schiff, both Democrats from California and members of the House Judiciary Committee, along with Sen. Marsha Blackburn, Republican of Tennessee, sent a letter to U.S. Register of Copyrights Shira Perlmutter last week raising “serious questions” as to whether Spotify is abusing the collective licensing system for mechanical rights by bundling its previously standalone audiobook service into its premium music subscription tier. The bundling arrangement has the effect of lowering the royalty rate Spotify pays to music publishers and songwriters under the terms of the most recent rate-setting agreement adopted by the Copyright Royalty Board.
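How much the math moves depends on the specifics, but the mechanism is easy to sketch. The following back-of-the-envelope Python example uses hypothetical prices, a hypothetical headline royalty percentage and a simplified pro-rata allocation, not the actual Phonorecords IV formula or rates; it is meant only to show why classifying a subscription as a bundle shrinks the revenue base to which the mechanical royalty percentage is applied.

```python
# Hypothetical numbers and a simplified pro-rata allocation -- not the actual
# Phonorecords IV formula or rates. Shows why bundle treatment lowers the
# music-attributable revenue base, and with it the per-subscriber royalty.

subscription_price = 11.99      # monthly price of the combined tier (hypothetical)
standalone_music = 10.99        # standalone music price (hypothetical)
standalone_audiobooks = 9.99    # standalone audiobook price (hypothetical)
headline_rate = 0.15            # headline percentage of revenue (hypothetical)

# Standalone treatment: the full subscription price counts as music revenue.
royalty_standalone = subscription_price * headline_rate

# Bundle treatment: music is credited only with its pro-rata share of the price.
music_share = standalone_music / (standalone_music + standalone_audiobooks)
royalty_bundled = subscription_price * music_share * headline_rate

print(f"per-subscriber royalty, standalone treatment: ${royalty_standalone:.2f}")
print(f"per-subscriber royalty, bundle treatment:     ${royalty_bundled:.2f}")
```

However the allocation is actually calculated under the CRB rules, the lawmakers’ concern is the same: the percentage is applied to a smaller music-attributable revenue base than it would be for a standalone music subscription.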

Microsoft Could Be the Big Winner From Apple’s OpenAI Deal

As Microsoft, Google and other rivals raced ahead on AI, Apple watchers grew jittery with anticipation that the House that Jobs built would soon announce a category-redefining advance of the technology.

What they got at this week’s Worldwide Developers Conference was a clearly articulated but hardly revolutionary implementation of existing AI tools and use cases wrapped up into a very Apple-centric strategy, right down to the attempted rebranding of “artificial intelligence” as “Apple Intelligence.”

You can find the details on Apple Intelligence here. But the main goal of the strategy seems to be to turn AI into an iPhone feature rather than a standalone product or service. Its hallmark is to keep as much AI processing as possible on the phone itself, which it does largely by limiting what you can do with Apple Intelligence.

DOJ, FTC Divvy Up AI Antitrust Oversight

With the U.S. Congress seemingly too paralyzed to take meaningful action on artificial intelligence (or really anything else), federal regulators continue to try to step into the breach. Further to our previous post, the Department of Justice and the Federal Trade Commission are close to reaching an agreement on dividing up antitrust scrutiny of the largest AI companies, according to a report in the New York Times and subsequently confirmed by CNN.

Though not yet final, according to the reports, the agreement would see the FTC continue its current probe of OpenAI and Microsoft (along with Amazon and Anthropic) while DOJ will take on Nvidia and continue its oversight of Google. Separately (or perhaps not), the FTC expanded its investigation of Microsoft, seeking information on its recent acquihire and $650 million licensing deal with startup Inflection AI and whether the deal was deliberately structured to skirt federal antitrust review.

Everyone Agrees on the Need to Do Something About Deepfakes, Just Not How to Do It

BSA | The Software Alliance, one of the heavy-hitting tech industry associations in Washington, counts a number of major AI companies among its members, including Adobe and Microsoft. But this week it plans to release a policy statement urging Congress to “take steps” to protect artists from the spread of unauthorized, AI-generated deepfakes. The statement, provided to RightsTech in advance of release, lists eight key principles, including creating a new right for artists to authorize or prevent the commercial dissemination of digital replicas of their name, image, likeness or voice, and prohibiting commercial trafficking in any algorithm, software, technology or service that has the “primary purpose” of creating or disseminating such a replica “knowing that this act was unauthorized.”

AI, Antitrust and Monopsony

Two years ago, the antitrust division of the U.S. Justice Department successfully sued to block Penguin Random House’s proposed acquisition of Simon & Schuster, which would have reduced publishing’s Big Five houses to four. While PRH offered all the usual arguments about greater scale and efficiencies benefitting consumers through lower retail prices for books, the department focused its case on the deal’s potential impact on authors, rather than consumers.

In its briefs, DOJ invoked the rarely discussed doctrine of monopsony, the inverse of monopoly, in which one or a few dominant buyers are able to dictate and drive down the prices sellers can charge. In this case, the department was concerned with the impact on authors’ advances of having one fewer dominant buyer in the market for manuscripts.
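For readers new to the doctrine, here is a standard textbook-style illustration, in Python with made-up numbers (not anything drawn from the DOJ briefs): a single dominant buyer facing an upward-sloping supply curve maximizes its surplus by buying fewer manuscripts and paying lower advances than a competitive market would.

```python
# Textbook monopsony sketch with hypothetical numbers -- not from the DOJ case.
# Inverse supply: sellers require price w(q) = A + B*q to supply q manuscripts.
# Buyer's marginal value of the q-th manuscript: v(q) = V - C*q.
A, B = 10.0, 1.0
V, C = 100.0, 1.0

# Competitive benchmark: price equals marginal value, v(q) = w(q).
q_comp = (V - A) / (B + C)
w_comp = A + B * q_comp

# Monopsonist: sets marginal expenditure d(w(q)*q)/dq = A + 2*B*q equal to v(q),
# because buying one more unit raises the price it pays on every unit.
q_mono = (V - A) / (2 * B + C)
w_mono = A + B * q_mono

print(f"competitive market: quantity={q_comp:.0f}, price paid={w_comp:.0f}")
print(f"monopsony:          quantity={q_mono:.0f}, price paid={w_mono:.0f}")
# competitive market: quantity=45, price paid=55
# monopsony:          quantity=30, price paid=40
```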

Deals Hint at Emerging Market for AI Training Data

For all the Sturm und Drang around the unlicensed use of copyrighted works to train AI models, there suddenly seem to be a lot of licensing deals being discussed for access to those same kinds of works for training. In just the past two weeks, OpenAI has signed licensing deals with Wall Street Journal publisher News Corp., Vox Media and The Atlantic for access to their archives for training, and we’ve seen reports that Meta, Alphabet and OpenAI are all in conversations with the Hollywood studios about licensing movie and television footage.

In the months before that, OpenAI struck a deal with the Financial Times, and Reddit signed with both Google and OpenAI. Photo agencies and archives, including Shutterstock, Photobucket and Flickr, have also been actively striking deals with AI companies over the past few months.

He Said, ‘Her’ Said

OpenAI’s Sam Altman did himself no favors when he tweeted out “her” to mark the official unveiling of GPT-4o, the model behind the company’s new talking chatbot. The tweet (or whatever we’re supposed to call them these days) appeared to be a reference to the 2013 Oscar-winning film, Her, for which Scarlett Johansson provided the sultry voice of Samantha, an AI assistant, and which Altman has publicly identified as his favorite movie.

It also appeared to confirm that the voice of Sky, one of the five voices available in the chatbot, and one which bears a striking resemblance to that of Johansson’s character in the film, was specifically and intentionally designed to mimic the actress, whether by cloning her voice from recordings or by hiring another actress to imitate her.

The Big Opt-Out: Sony Music Puts AI World on Notice

Sony Music Group last week began sending legal letters to roughly 700 AI developers and music streaming services demanding detailed information on whether, how and by what means the recipients or their affiliates have used SMG-owned content “to train, develop or commercialize any of your AI systems,” and a “description of the manner in which such SMG Content was accessed and/or reproduced and/or extracted by you or your affiliates or any third party contracted to do so on your behalf.”

The text of the letters has not yet been released publicly. Sony separately posted a “Declaration of AI Training Opt-Out” on its website.
