Negotiations toward a final text of the European Union’s AI Act are going down to the wire this week as the final “trilogue” session among the EU Parliament, Commission and Council is scheduled for Wednesday (Dec. 6). The pressure is on to reach an agreement before the end of the year, as the June 2024 EU Parliamentary elections loom over the talks. If agreement can’t be reached before then, there’s a danger the process would have to be restarted with a new Parliament and new leadership in the Council, which could scuttle the whole project.
Yet despite the pressure, the parties to the current talks appear to be farther apart than when they started, endangering what had been touted as the world’s first comprehensive regulatory regime for AI. The consensus on the basic structure of the proposed regulations that seemed at hand in the summer was thrown into turmoil last month when France, supported by Germany and Italy, suddenly reversed its position and embraced “mandatory self-regulation” via codes of conduct for the largest foundation models instead of the once-agreed tiered system of binding obligations.
Accounts of what was behind the sudden volte-face by the EU’s three largest economies vary. Some blame an intense lobbying blitz mounted by U.S. tech giants in Brussels and national capitals. Microsoft, Google and OpenAI, despite the lip service they’ve paid to the need for regulation of AI, were three of the five biggest lobbying spenders in European capitals in 2023, according to the Corporate Europe Observatory. OpenAI’s once-and-future CEO Sam Altman even suggested earlier this year that the ChatGPT maker could leave Europe if it were subjected to the strictest rules, although he later walked those comments back.
Others point the finger at Cédric O, France’s digital state secretary and a strong proponent of regulation until May 2022, when he left government to become a co-founder, advisor and chief lobbyist for Mistral AI, a home-grown AI company. O is credited with organizing a letter, signed by the CEOs of several major EU companies, including Airbus and Siemens, along with Mistral, raising concerns that the agreed framework of the AI Act would limit the development of AI technology on the Continent and constrain European startups’ ability to catch up with the American giants.
Whatever the real reason, one provision of the framework from which France et al. seek to exempt developers of generative AI foundation models is a requirement to document and disclose details of the data used to train their models. As discussed here in a previous post, foundation-model developers have lately been getting less transparent about their methods, not more, as they face a growing pile of lawsuits over their use of copyrighted works in training.
Transparency into training datasets has been a key demand of artists and copyright owners in the debate around regulating AI. It has featured prominently in public comments filed by rights-owner groups with the U.S. Copyright Office, and last month a baker’s dozen of organizations representing European creative communities issued an urgent plea to preserve the transparency requirement in the AI Act.
“[W]e urge all policy makers to prioritise maximum transparency on training data and artificially generated content to provide guarantees to citizens, authors and performers that their rights are respected,” the group said in a joint statement. “Transparency is a prerequisite for both innovation and creation to continue to grow for the benefit of all.”
In response to the French-German-Italian about-face, the European Commission, the EU’s executive arm, floated a proposed compromise text that preserved most of the structure of the original draft but relaxed some of the requirements on foundation models, including full transparency into training data. The EU Parliament, however, dug in its heels, insisting that binding requirements for foundation models remain in the Act.
The presidency of the EU Council, currently held by Spain, then floated a new proposed compromise that would maintain some of the original mandatory obligations on “general purpose” (i.e. foundation) models, including a requirement that AI-generated or manipulated content be detectable.
As for training, the Spanish proposal would require developers to put “adequate measures” in place to comply with legislative requirements and to publish a summary of their training data.
The betting line in Brussels currently is that the negotiators will ultimately find a solution all parties can live with, if only to avoid the specter of having to start all over again after the parliamentary elections. If they can’t, however, the EU could squander its chance to establish a global benchmark for AI regulation, leaving the field open for others to take the lead.
The U.S., U.K. and China have all taken unilateral steps toward creating a regulatory framework for AI, and transnational organizations such as the G7 have also begun discussing the issue.
For the most part, however, those other initiatives are focused on AI safety and national economic competitiveness, not intellectual-property concerns. For artists, authors and rights owners, passage of the EU AI Act still represents the best chance to establish copyright concerns as a central pillar of global AI regulation. Should it crash and burn, it could be a long wait for another such opportunity.
(Update: As of Wednesday afternoon CET, optimism was rising among negotiators that a deal could be reached by that evening. The EU Parliament had scheduled a press conference for Thursday morning CET to discuss the “outcome of [the] negotiations.”)