OpenAI Safety Team's Departure Is a Fire Alarm

Jason Green-Lowe, May 20, 2024

Jan Leike resigned from his position as co-lead of OpenAI's superalignment team on Tuesday, May 14th, joining an exodus started by his fellow safety leader Ilya Sutskever and top safety researchers Leopold Aschenbrenner and Pavel Izmailov.

At first, Leike's only statement was literally: "I resigned."

As Kelsey Piper has documented, OpenAI's extremely strict nondisparagement agreements mean that an ex-employee who says anything at all about why they resigned is probably giving up millions of dollars in vested equity. Last Friday, Leike apparently accepted that penalty for the benefit of humanity and tweeted out a detailed explanation of what he saw wrong at OpenAI and what prompted him to leave.

As Leike puts it, "over the past years, safety culture and processes have taken a backseat to shiny products." He points out that he struggled to get enough compute to test his own company's products for safety. In his words, "we urgently need to figure out how to steer and control AI systems much smarter than us," but he is "concerned we aren't on a trajectory to get there."

Cynics sometimes complain that the only reason anyone sounds the alarm about AI safety risks is to enrich themselves with anti-competitive regulation. Those cynics can't explain why the senior half of OpenAI's safety team has quit in the past three months, giving up fat salaries and risking their equity. If the concern about safety is just a ruse, then why are all the safety engineers quitting? Leike is putting his money where his mouth is. He and his peers are sacrificing huge paydays to bring us what they see as the true story behind OpenAI's reckless pursuit of the next hot product.

The responsible thing to do is believe him: advanced general-purpose AI is extremely dangerous, and we can’t trust private companies to adequately manage those dangers. The only way to protect the public against the risks of advanced AI is with binding government regulations, like the policies proposed in CAIP’s model legislation.
