Political campaigns should disclose when they use AI-generated content on radio and television. You’d be forgiven for thinking this idea wouldn’t be contentious. Indeed, we’ve seen bipartisan action on the topic, including an adopted disclosure bill in New Hampshire and a federal bill that would go further and ban deceptive AI in political messaging. Yet a proposed Federal Communications Commission (FCC) rule requiring such disclosures has drawn a flurry of contention.
Some of this dissent is internal. FCC commissioners Brendan Carr and Nathan Simington separately argued both that the “FCC is not the right entity” to introduce such requirements and that such disclosures represent a “broader effort to control political speech.” Both commissioners also aired concerns that the FCC had not clearly defined AI-generated content. For example, AI tools could be used merely to sharpen the image or sound quality of a video without substantially changing its content.
Federal Election Commission (FEC) Chairman Sean Cooksey warned that the rule “would invade the FEC’s jurisdiction” and that the FCC “lacks the legal authority” to issue it. In Congress, Senators Lee and Budd have proposed a bill that would explicitly ban the FCC from enforcing transparency around AI in political advertising.
It’s certainly fair to raise substantive suggestions and even arguments against the rule; indeed, that’s the very purpose of a public comment period. Complaints that the definition of AI is not specific enough are easily addressed by amending the rule’s language. A final version could, for instance, specify whether the rule covers AI-enhanced imagery, such as standard Photoshop edits. Another concern is that the rule does not address AI-generated content in campaigns’ online advertising. That gap is a function of the FCC’s limited powers, which reach broadcast but not online media; if another body takes up the rule, it should probably cover online campaigning as well.
However, the most interesting issue is neither the merits of this specific proposed rule nor the question of which agency should regulate election communications. The burning question is: how can we ensure that regulations evolve to account for the new threats posed by AI? If opponents are truly concerned that the FCC is not the right place to enforce transparency, that’s fine. But the government cannot simply stand aside while technology fundamentally changes the risk landscape. Someone in the federal government needs to address the implications of AI-generated content for political advertisements, be it the FCC, the FEC, another agency, and/or Congress.
Taking a step back, the discourse around this proposal reflects a broader challenge for AI safety in the US: no one is responsible. The Center for AI Policy proposed a dedicated federal agency for AI safety in our model legislation, but absent that, we need Congress and individual agencies to step up. Without federal leadership, we’ll be left floundering and quibbling over technicalities.