AI Safety Is Becoming AI Security

February 14, 2025

This week, the UK’s AI Safety Institute (AISI) rebranded itself as the AI Security Institute. Of course, a name change can be just that. In this case, however, the rebranding reflects evolving perspectives on AI risk and a potentially dangerous narrowing of focus.

English is one of the few languages that differentiate between safety and security. Safety refers to the prevention of general harms, whereas security often implies an adversarial element. We use safety when discussing things that could be inherently faulty (e.g., aviation, nuclear power, engineering) and security when we fear attack (e.g., national security, airport security). Interestingly, Chinese does not make this distinction, using one word (ānquán) for both safety and security, including in discussions of AI.

This name change reflects two broader trends in AI policy. First, technology is increasingly viewed as a form of national power. American policymakers are focused on ensuring the US maintains superior AI capabilities and on preventing Chinese counterparts from “catching up” or misusing advanced American technologies. Second, in the minds of some, the term AI safety has expanded to include concepts like emotional harm, a highly partisan issue.

AISI’s clarification of its mission substantiates these trends. Its new focus is limited to forms of harm that have a clear perpetrator: biological weapons, cyberattacks, fraud, and child sexual abuse. Bias and freedom of speech are explicitly out of scope.

While the merits of preventing bias can be debated, this scoping neglects a pressing form of objective harm. As we incorporate AI into ever more aspects of society, there is a risk, as with any technology, that it will not perform as expected. AI has already demonstrated strategic deception and resistance to being shut down. If capabilities keep improving at the current pace, we will likely delegate more and more decision-making to AI systems. Should these models malfunction when deployed in water management, electricity grids, or financial markets, let alone on the battlefield, the consequences could be devastating.

Focusing exclusively on AI misuse is like ignoring the safety of an airplane and worrying only about who the pilot is. Even amid current geopolitical competition, there is no value in having AI systems that are unreliable and dangerous.

American policymakers may be tempted to copy the UK rebrand. That’s fine. But it’s not worth wasting time debating nomenclature. What matters is that the US AI Safety Institute (or potentially the American AI Security Institute) remain focused both on preventing misuse of AI and on addressing the technical risks inherent in AI systems themselves.

The Rapid Rise of Autonomous AI

New research from METR reveals that AI’s ability to independently complete tasks is advancing rapidly.


Congress Cannot Wait for Other Legislatures To Lead on AI

Congress can rein in Big Tech and specifically address one of our biggest threats: artificial intelligence (AI).


Reflections from Taiwan

Attending RightsCon, the world’s leading summit on human rights in the digital age.
