Two years ago, the antitrust division of the U.S. Justice Department successfully sued to block Penguin Random House’s proposed acquisition of Simon & Schuster, which would have reduced publishing’s Big Five houses to four. While PRH offered all the usual arguments about greater scale and efficiencies benefiting consumers through lower retail prices for books, the department focused its case on the deal’s potential impact on authors, rather than consumers.
In its briefs, DOJ invoked the rarely discussed doctrine of monopsony, the inverse of monopoly, wherein a few dominant buyers are able to dictate and drive down the prices sellers can charge. In this case, the department was concerned with the impact on authors’ advances of having one fewer dominant buyer in the market for manuscripts.
In a sweeping, 80-page opinion, U.S. District Court Judge Florence Pan accepted DOJ’s framing of the case, noting that all of the factors typically considered in cases of alleged monopoly — the existence of a discrete market, a party with market power, the potential harm to competition in the relevant market — can apply equally under the statutes to conditions of monopsony, and that a merger can therefore be barred on monopsony grounds.
Last week, Assistant Attorney General Jonathan Kanter, head of the antitrust division, again warned of the dangers of monopsony, this time regarding generative AI companies. Speaking at a conference at Stanford University on the economic impact of generative AI systems, Kanter warned AI companies they could be facing action by his department if they do not find a way to fairly compensate artists, performers and other creators for the use of their work to train generative AI models.
“What incentive will tomorrow’s writers, creators, journalists, thinkers, and artists have if AI has the ability to extract their ingenuity without appropriate compensation?” Kanter asked, rhetorically. “Absent competition to adequately compensate creators for their works, AI companies could exploit monopsony power on levels that we have never seen before, with devastating consequences.”
As I noted in a post on the PRH/S&S deal, that case, and now Kanter’s warning, reflect an emerging analysis of competitive harm that focuses on the buy side of markets rather than exclusively on the consumer-price-focused sell side that has dominated antitrust thinking and enforcement for the past 30 years. That analysis seems particularly relevant in markets for creative works and rights, where the structure of those industries creates significant asymmetries in economic power between artists on the one hand and media company buyers on the other.
In 2021, a British parliamentary inquiry into the economics of music streaming concluded that artists are significantly disadvantaged by the structure of the business.
“Streaming has undoubtedly helped save the music industry following two decades of digital piracy but it is clear that what has been saved does not work for everyone,” the Digital, Culture, Media and Sport Committee wrote in a report on the inquiry. “The issues ostensibly created by streaming simply reflect more fundamental, structural problems within the recorded music industry. Streaming needs a complete reset.”
Similar structural problems are now also becoming apparent in the market for generative AI training materials, and have clearly caught the eye of regulators.
As noted here in previous posts, the Federal Trade Commission, which shares antitrust enforcement responsibility with DOJ, has focused extensively on how generative AI’s ability to mimic an artist’s voice or style could potentially constitute an unfair method of competition or an unfair or deceptive practice.
The agency has also opened an investigation into licensing deals between publishers and AI companies and appears to be interested in whether the largest AI companies will be able to exclude smaller competitors from the market for the most valuable training data, eventually ushering in monopsony conditions.
In comments submitted to the U.S. Copyright Office’s inquiry into AI last year, the FTC also addressed potential harms to creators from AI’s ability to mimic an artist’s voice or style and whether that could constitute an unfair or deceptive practice.
“[N]ot only may creators’ ability to compete be unfairly harmed, but consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist but it has been generated by someone else using an AI tool,” the agency wrote. “In addition, conduct that may be consistent with the copyright laws nevertheless may violate Section 5 [of the FTC Act]. Many large technology firms possess vast financial resources that enable them to indemnify the users of their generative AI tools or obtain exclusive licenses to copyrighted (or otherwise proprietary) training data, potentially further entrenching the market power of these dominant firms.”
The FTC drew criticism from some corners for straying out of its lane in attempting to conflate matters of copyright law, where it lacks jurisdiction, with competition law, where it does not. But with the Justice Department now also voicing concern over the potentially harmful impact on artists and creators from monopsony conditions in the market for AI training data, that criticism will likely be muted.
All of which is bad news for AI companies already battling a flood of copyright litigation. There is genuine and legitimate dispute over whether and how AI training falls into existing copyright categories, some of which may ultimately require new legislation to resolve. But antitrust and consumer protection laws are already on the books and well-tested in court. Any litigation based on those laws, moreover, would likely be brought by state and federal government agencies, not private plaintiffs, so the power and resource asymmetries would likely work the other way.