Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously

Jason Green-Lowe, June 4, 2024

The Center for AI Policy applauds Leopold Aschenbrenner, a former OpenAI safety researcher who was allegedly fired for sounding the alarm about the company's failure to take security seriously.

Aschenbrenner has just published a thorough, readable, and evidence-based argument that AI systems will grow dramatically more capable by 2027.

Aschenbrenner's analysis suggests that by 2027, AI systems could possess intellectual capabilities comparable to those of a professional computer scientist. This development, he argues, would enable AI to improve itself at an unprecedented rate, leaving humans struggling to keep pace with the exponential progress. The potential consequences of such advancements, without proper safety precautions in place, are both profound and alarming.

Over and over again, experts have guessed that deep learning will run into a brick wall, and they’ve been wrong every time: artificial intelligence keeps growing more powerful, and there’s no good reason to think that this trend will stop. In just a few years, advanced AI systems could have roughly as much brainpower as a skilled human researcher, reaching a point where AI can be used to improve its own capabilities. This will take the already fast pace of current tech progress and supercharge it to a point where humans will struggle to keep up.

Even in the best-case scenario, where companies make AI safety a top priority, America is going to be in for a very bumpy ride, because aligning super-powerful artificial intelligence with human values is extremely challenging.

As Aschenbrenner puts it, “we won’t have any hope of understanding what our billion superintelligences are doing (except as they might choose to explain to us, like they might to a child). And we don’t yet have the technical ability to reliably guarantee even basic side constraints for these systems, like ‘don’t lie’ or ‘follow the law.’”

Unfortunately, companies like OpenAI are not making AI safety a priority at all. Instead, the people heading up their internal safety teams are resigning, saying they lacked the necessary resources and were "sailing against the wind." The stakes are too high to trust tech companies to follow voluntary standards: it's well past time to regulate the field.
