Ignoring AI Threats Doesn’t Make Them Go Away

Brian Waldrip and Makeda Heman-Ackah
September 30, 2024

Earlier this month, the Senate Select Committee on Intelligence, normally a very discreet panel, made a bipartisan, public, and emphatic plea to the American people and the private sector to remain vigilant against election interference. As Chairman Mark Warner (D-VA) said at the hearing, “Foreign Threats to the 2024 Elections”: “[T]his is really our effort to try to urge [technology companies] to do more - to kind of alert the public that this problem has not gone away.”

The Center for AI Policy (CAIP) strongly supports this effort. Given its focus on artificial intelligence (AI) safety, CAIP shares the committee’s and the panelists’ concerns about how increasingly capable technologies are being illicitly used to undermine US elections and sow domestic division.

Just weeks before the hearing, the US Department of Justice filed an indictment against Russian conspirators for bankrolling social media influencers. Russia was mentioned over 60 times during the hearing, in connection with both past operations and ongoing efforts, and was linked to antidemocratic interference not only in the United States but in operations around the world.

It’s not just Russia, of course. Sen. Susan Collins (R-ME) highlighted intelligence showing that China is interfering with “down ballot races at the state level, county level, local level.” This is particularly troubling because these races often have fewer resources to guard against interference. We know about Chinese espionage efforts to influence a staffer in the New York governor’s office; there are almost surely similar plots we have not yet discovered in states with fewer investigative journalists.

Iran and North Korea were also implicated by the committee and panelists for their election meddling and disinformation campaigns.

Leading technology companies agree that AI-generated deepfakes are an ongoing problem. Alphabet President and Chief Legal Officer Kent Walker said, “We remain on the lookout for new tactics and techniques in both cybersecurity and disinformation campaigns.” Microsoft’s Vice Chair and President Brad Smith concurred, noting that “the most perilous moment will come, I think, 48 hours before the election. That's the lesson to be learned from, say, the Slovakian election last fall and other races we have seen.”

Unfortunately, just because Big Tech companies are aware of the risks posed by advanced AI doesn’t mean that they will act to prevent them. At a hearing held by the Subcommittee on Privacy, Technology, and the Law, Sen. Richard Blumenthal (D-CT) described the current AI landscape as the “Wild West,” pointing out that “the incentives for a race to the bottom are overwhelming. Companies, even as we speak, are cutting corners and pulling back on efforts” to prevent catastrophic harms.

This is the automatic result of Big Tech’s corporate culture. As Margaret Mitchell, a former Research Scientist at Google, testified, “internal incentives around promotions and raises pushed workers to launch as much as possible…without incorporating different perspectives or developing systems that were informed by the impacts of the technology.”

Similarly, former OpenAI Superalignment researcher William Saunders complained that OpenAI has “repeatedly prioritized deployment over rigor,” even as it develops AI models that could help reproduce biological weapons. Saunders described his safety team’s experience of being asked to “figure it out as we went along” as “terrifying” in light of the seriousness of the threat. Today, the situation is even worse: what was once an undersized team is now even smaller, as “its leaders and many key researchers resigned after struggling to get the resources they needed to be successful.”

Saunders noted, “This is true not just with OpenAI; the incentives to prioritize rapid development apply to the entire industry. This is why a policy response is needed.”

CAIP couldn’t agree more. Unless and until Congress passes legislation to stop it, Big Tech will continue to release products that threaten both the integrity of our elections and our public health.

Sen. Blumenthal and Sen. Josh Hawley (R-MO) did the right thing in 2023 by releasing a one-page “framework” to address these threats. Since then, we still haven’t seen even a draft of the legislative text to implement that framework. CAIP calls on Sens. Blumenthal and Hawley to publish their bill promptly and keep the legislative process moving. Congress may have gotten bogged down this year, but the threats from AI are still advancing full steam ahead.
