Within the next 3-10 years, AI companies could develop "superintelligent" AI: AI that is vastly smarter and more powerful than humans.
AI progress can be rapid. AI systems are already being used to accelerate coding and engineering work, and they show early signs of dangerous capabilities such as hacking, weapons design, persuasion, and strategic planning.
Top AI companies admit that their current practices could be insufficient for handling anticipated future AI systems.
Solving safety research questions requires time, and unchecked competitive pressures could compel companies to prioritize profits over safety.
We need to prevent an AI arms race so that we have enough time to solve safety challenges before building catastrophically powerful systems.
That's why we're calling for:
Countless American sports fans are suffering emotional distress and financial ruin as a result of AI-powered online sports betting.
Reclaim safety as the focus of international conversations on AI