AI Expert Predictions for 2027: A Logical Progression to Crisis

April 3, 2025

A group of AI researchers and forecasting experts just published their best guess of the near future of AI: “AI 2027.” Their roadmap describes a logical progression of AI capabilities that produces a significant potential for catastrophe in just two years. This is the kind of insight that everyone, from policymakers to the general public, should absorb to understand the need for immediate action to mitigate these risks.

AI 2027 was authored by a team of experts, including a former OpenAI researcher whose previous AI predictions have held up well, and informed by feedback from dozens of experts currently working in AI policy/governance and about a dozen currently working at frontier AI companies.

Beginning with predictions about AI capabilities in mid-2025, the authors describe the following progression: 

  • Agents perform basic functions but are expensive and often bungle tasks.
  • Agents become reliable and useful for AI research.
  • Agents optimized for AI R&D are as good as the top human experts.
  • Agents produce AI R&D breakthroughs faster and at lower costs than their human counterparts. 

By late 2027, a major datacenter can hold tens of thousands of AI researchers that are each many times faster than the best human research engineer. The best human AI researchers become spectators to AI systems that are improving too rapidly and too opaquely to follow. Superintelligent AI possesses off-the-charts bioweapons and persuasion capabilities, and there is even the possibility of an agent “escaping” its datacenter and operating autonomously. Meanwhile, the system's autonomous capabilities and motivations have outpaced the development of safety and control measures.

Then we have a crisis. Governments and AI companies face difficult decisions over whether to pause development to address safety concerns, push ahead to avoid ceding the lead, or even take military action against rivals.

Part of what makes this scenario so compelling is that it presents a convincing case for a rapid escalation in AI capabilities - far beyond what already seems like a rapid escalation today. When AI research is automated and scaled such that years of progress happen in weeks, small advantages will compound into an insurmountable lead within a few months.

How likely is this scenario? The authors acknowledge their uncertainty, but look at the news. Google employees are being told to “turbocharge” their efforts because “the final race to AGI is afoot.” AI agents’ performance has improved rapidly just in the past few months (as shown in this analysis of Manus performance). AI 2027 is a timeline we need to take seriously.

Whether superintelligent AI systems emerge in one year or ten, the United States must address potential risks today. We’re already seeing AI systems that provide the public with detailed instructions for how to make chemical weapons, and that’s not the only imminent threat. 

The Center for AI Policy recently offered formal recommendations for the Trump Administration’s AI Action Plan. Chief among those recommendations was for the U.S. to begin conducting national security audits of the most advanced AI systems. CAIP also suggested other actions, such as gathering cyber incident data from AI companies and accelerating AI explainability research. These proposals, along with others CAIP is pursuing, aim to ensure that as we move toward 2027 and beyond, society can fully benefit from AI advances while minimizing catastrophic risks.

The Rapid Rise of Autonomous AI

New research from METR reveals AI’s ability to independently complete tasks is accelerating rapidly.

Read more

Congress Cannot Wait for Other Legislatures To Lead on AI

Congress can rein in Big Tech and specifically address one of our biggest threats: artificial intelligence (AI).

Reflections from Taiwan

Attending RightsCon, the world’s leading summit on human rights in the digital age.