Last Monday night (8/19/24), the Democratic Party approved its 2024 Party Platform.
The Democratic Party is clearly aware of the possibility that advanced AI could cause catastrophic damage. At his listening forum in December 2023, Senate Majority Leader Schumer (D-NY) asked, “could AI systems be used to more easily create a deadly novel pathogen or surpass the capabilities of even the smartest humans?”
As Rep. Don Beyer (D-VA) explained to Time magazine, “As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligence—the end of humanity—I don't think we can afford to ignore that. Even if there's just a one in a 1000 chance, one in a 1000 happens. We see it with hurricanes and storms all the time.”
Based on their platform, Democrats appear to believe they can solve the problem with voluntary guidelines and best practices. Although they call for an outright ban on voice impersonations, the rest of the policies in their seven paragraphs on AI appear to lack any enforcement mechanism.
The Center for AI Policy (CAIP) welcomes the Democratic Party’s proposal to “invest in the AI Safety Institute to create guidelines, tools, benchmarks, and best practices for evaluating dangerous capabilities and mitigating AI risk.” However, such investments cannot and will not solve the problem by themselves, because ultimately it is each company’s choice whether to follow the guidelines.
Such flimsy oversight would be readily recognized as insufficient in any other industry: we do not rely on ‘voluntary guidelines’ for combating wildfires, for avoiding food poisoning, or for making sure that airplanes stay in the air.
Why, then, is the Democratic Party apparently content to stick to voluntary guidelines for AI safety? This is not what the voters want. A poll conducted in June 2024 showed that 75% of voters from both parties favor “taking a careful, controlled approach” to AI over “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” If we leave the decision up to private companies, as we have done so far, then at least one company will always choose to race ahead in pursuit of shareholder profits and fame.
In the long run, the only way to give Americans the careful, controlled approach to AI that they demand and deserve is by working with Congress to pass binding AI safety legislation that levels the playing field and requires all companies to develop AI safely.
Senators Blumenthal (D-CT), Reed (D-RI), Klobuchar (D-MN), and Hickenlooper (D-CO) have already acknowledged the need for mandatory third-party evaluations. CAIP urges the rest of the Democratic Party to join them.