OpenAI Safety Team's Departure is a Fire Alarm

May 20, 2024

Jan Leike resigned from his position as co-lead of OpenAI’s Superalignment safety team on Tuesday, May 14th, joining an exodus that began with his fellow safety lead Ilya Sutskever and top safety researchers Leopold Aschenbrenner and Pavel Izmailov.

At first, Leike’s only public statement was two words:

“I resigned.”

As Kelsey Piper has documented, OpenAI’s extremely strict nondisparagement agreements mean that an ex-employee who says anything at all about why they resigned is likely forfeiting millions of dollars in vested equity. Last Friday, Leike apparently accepted that penalty for the benefit of humanity and tweeted out a detailed explanation of what he saw going wrong at OpenAI and why he left.

As Leike puts it, “over the past years, safety culture and processes have taken a backseat to shiny products.” He notes that he struggled to secure enough compute to test his own company’s products for safety. “We urgently need to figure out how to steer and control AI systems much smarter than us,” he warns, but he is “concerned we aren't on a trajectory to get there.”

Cynics sometimes complain that the only reason anyone sounds the alarm about AI safety risks is to enrich themselves with anti-competitive regulation. Those cynics can’t explain why the senior half of OpenAI’s safety team has quit in the past three months, walking away from generous salaries and putting their equity at risk. If the concern about safety were just a ruse, why would all the safety researchers be quitting? Leike is putting his money where his mouth is. He and his peers are sacrificing huge paydays to tell us what they see as the true story behind OpenAI’s reckless pursuit of the next hot product.

The responsible thing to do is to believe him: advanced general-purpose AI is extremely dangerous, and we cannot trust private companies to manage those dangers on their own. The only way to protect the public from the risks of advanced AI is binding government regulation, like the policies proposed in CAIP’s model legislation.
