AI Safety and the US-China Arms Race

Claudia Wilson
October 29, 2024

We can’t lose to China. When discussing AI governance, this one line is often produced as a conversation-ending trump card. The rationale goes that AI safety requirements will undermine the US’s efforts in technology competition with China. Common wisdom holds that regulation will stymie market competition and innovation through exorbitant compliance costs. Innovative technologies drive productivity, and subsequent economic growth manifests as national power. Advanced technologies are also crucial to ensure the US has a military advantage. 

This paper analyzes the claim that AI safety and US primacy are direct trade-offs. Zero-sum competition is a dangerous framework and one to navigate carefully. However, for the sake of scope, we place these concerns to the side and ask the question: Can the US still “win” against China if it introduces AI safety regulation? We interpret “winning” as the US leading in AI innovation, translating such innovation into economic growth, and developing and fielding superior military technologies. We find that AI safety measures will not hinder the US in an “AI race” because:

  1. Inexpensive AI safety costs are unlikely to impact AI innovation.
  2. Safer AI can drive trust and thus adoption, which is crucial for economic growth.
  3. Safety requirements on public models will not impede military capabilities. 

Our analysis focuses on mandatory pre-deployment evaluations, which are a series of techniques to assess risk prior to releasing a model to the broader public. By limiting these requirements to the developers of the most powerful models, such evaluations would likely cost less than 0.5% of upfront training costs. The relative compliance cost would be even smaller if training costs reach the predicted one billion dollars by 2027. Expensive training costs and the current market landscape also suggest that small companies will not be burdened by these compliance costs since they cannot afford to operate in this market, regardless of regulation. 

We also find that introducing pre-deployment evaluations could increase public trust in AI, drive greater adoption in business settings, and thus spur economic growth. Finally, we conclude that pre-deployment evaluations need not impede the government's ability to develop and deploy military and intelligence AI models, since the government could feasibly form partnerships granting it access to models before their public release.

If the US wants to engage in strategic competition against China, there are more effective methods at its disposal. To maintain a lead in innovation, the US could supplement hardware industrial policy with targeted immigration reform to increase the supply of technical talent. To drive adoption and economic growth, the government can invest in upskilling business users of AI and facilitating partnerships between them and developers of AI. To ensure that US security agencies have cutting-edge technology, the government should continue partnerships with specialized military technology innovators and provide support in navigating complicated government contracting requirements. The US can and should also strive towards maintaining international leadership through clear-eyed diplomacy, economic growth, and representing democracy as a better model of society than alternatives.

Each of these strategies is more effective and less risky than leaving AI entirely unregulated. A “wait and see” approach would endanger public safety while doing little to improve American national competitiveness. When it comes to AI safety and national security, there is no real tradeoff: they are compatible.

Read the full report here.
