CAIP Responds to Reported Mass Layoffs at NIST's AI Safety Institute

February 19, 2025

Washington, DC — In response to reports of imminent mass layoffs at the National Institute of Standards and Technology (NIST), including significant staff reductions at the US AI Safety Institute, Jason Green-Lowe, executive director of the Center for AI Policy (CAIP), released the following statement:

“The reported plans to terminate approximately 500 probationary staff members at NIST pose an alarming threat to our nation’s ability to develop effective and responsible AI. These cuts, if confirmed, would severely undermine the government’s capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever.

“The potential gutting of NIST’s AI Safety Institute defies common sense and puts Americans at risk. Whatever the merits of cutting probationary staff in other departments, it is not appropriate at NIST. Because the government has only recently begun serious work on AI safety, most of the experts at NIST who understand the catastrophic risks posed by advanced AI have only recently arrived from the private sector. Throwing them out of the government deprives us of the eyes and ears we need to identify when AI is likely to trigger nuclear and biological risks. The savings would be trivial, but the cost to our national security would be immense.

“CAIP calls on senior members of the Trump administration to take a second look at the effects of these staffing cuts and to issue appropriate exceptions. Just last week, Elon Musk made a $97 billion bid to force ‘OpenAI to return to the open-source, safety-focused force for good it once was.’ Under his guidance, DOGE should retain the AI experts at NIST who are needed to ensure that American AI remains a force for good.

“Commerce Secretary Lutnick, who was only confirmed yesterday, must act quickly to learn about the vital work being done by his department and to publicly explain how he will protect that work from the unintended consequences of broad budget cuts. Efficiency is one thing; thoughtlessly crippling the only office protecting us against catastrophe is another.”

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Based in Washington, DC, CAIP works to ensure AI is developed and implemented with effective safety standards. Learn more at centeraipolicy.org.

###
