Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously

Jason Green-Lowe, June 4, 2024

The Center for AI Policy applauds Leopold Aschenbrenner, a safety researcher at OpenAI who was allegedly fired for sounding the alarm about OpenAI’s failure to take security seriously.

Aschenbrenner has just published a thorough, readable, and evidence-based argument that AI systems will grow dramatically more capable by 2027.

Aschenbrenner argues that by 2027, AI systems could possess intellectual capabilities comparable to those of a professional computer scientist. At that point, AI could be used to improve itself at an unprecedented rate, leaving humans struggling to keep pace with the exponential progress. Without proper safety precautions in place, the potential consequences of such advancements are both profound and alarming.

Over and over again, experts have guessed that deep learning will run into a brick wall, and they've been wrong every time: artificial intelligence keeps growing more powerful, and there's no good reason to think that this trend will stop. In just a few years, advanced AI systems could have roughly as much brainpower as human experts, at which point AI can be used to improve its own capabilities. This will take the already fast pace of current tech progress and supercharge it to a point where humans will struggle to keep up.

Even in the best-case scenario, where companies make AI safety a top priority, America is going to be in for a very bumpy ride, because aligning super-powerful artificial intelligence with human values is extremely challenging.

As Aschenbrenner puts it, “we won’t have any hope of understanding what our billion superintelligences are doing (except as they might choose to explain to us, like they might to a child). And we don’t yet have the technical ability to reliably guarantee even basic side constraints for these systems, like ‘don’t lie’ or ‘follow the law.’”

Unfortunately, companies like OpenAI are not making AI safety a priority at all. Instead, the people heading up their internal safety teams have resigned, saying that they lacked the necessary resources and were "sailing against the wind." The stakes are too high to trust tech companies to follow voluntary standards: it's well past time that we regulate the field.
