Within the next 3-10 years, AI companies could develop "superintelligent" AI: AI that is vastly smarter and more powerful than humans.
AI progress can occur rapidly.
AI systems are already being used to accelerate coding and engineering work, and they show early signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.
Top AI companies themselves admit that their current practices may be insufficient for the more capable systems they anticipate building.
Solving the open safety research problems takes time, and unchecked competitive pressure could push companies to prioritize profits over safety.
We need to prevent an AI arms race so that we have enough time to solve safety challenges before building catastrophically powerful systems.
This requires:
That's why we’re calling for: