Within the next 3-10 years, AI companies could develop "superintelligent" AI: AI that is vastly smarter and more powerful than humans.
AI progress can occur rapidly: AI systems are already being used to accelerate coding and engineering work, and they show early signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.
Top AI companies themselves acknowledge that their current safety practices could be insufficient for the more capable systems they anticipate building.
Solving safety research questions takes time, and unchecked competitive pressure could push companies to prioritize profit over safety.
We need to prevent an AI arms race so that we have enough time to solve safety challenges before catastrophically powerful systems are built. That's why we're calling for coordinated action to slow the race to superintelligence.