Report on Autonomous Weapons and AI Policy

Tristan Williams, July 29, 2024

Report Summary: Autonomous Weapons

Autonomous weapons are here. Weapons that can select and engage targets without further human intervention already exist, from robot dogs with sniper rifles to automated machine guns to kamikaze drones, which have likely already claimed their first kill.

Development is ramping up. The Pentagon has already put someone in charge of “algorithmic warfare” and has requested over $3 billion for AI-related activities in 2024. The story is the same globally: spending on military robotics was estimated at around $7.5 billion in 2015 and is expected to grow to over $16 billion.

Autonomous weapons (AWs) pose many potential harms, including:

  • Increased military conflict and death, as both starting a war and pulling the trigger become easier and potentially more frequent.
  • Putting military-grade destructive power in the hands of civilians.
  • Loss of the ability to resist authoritarianism, as AWs stand to shift the balance of power away from an armed populace that serves as a check on power, giving a small authoritarian group unprecedented means to subjugate the rest.
  • Unintended destruction or escalation as a result of losing control of AWs. 

Keeping a human in the loop isn’t a panacea. Restricting all autonomous weapons development to systems which require heavy human involvement is not a sustainable solution because adversaries are unlikely to restrict themselves similarly. The U.S. will likely reduce human involvement in response, so as not to lose its tactical advantage.

Guardrails on further development are needed, and there are many options:

  • We can start with policies that have broad appeal, like a commitment to thoroughly test AWs before fielding them in battle, or a commitment to keep decisions about nuclear weapons in human hands.
  • Binding treaties will be necessary, but the current venue for discussing them (the UN CCW) is unlikely to produce one soon. One promising direction is encouraging NATO to take a leading role, potentially serving as a staging ground for broader negotiations and developing guidance for AW development through its “principles for responsible use”.
  • Domestically, we’re currently quite vulnerable to AWs like drones. To better protect our public spaces, we’ll need clarity on the jurisdictional mess that currently governs responses to such incidents, and further research and development toward cheap, scalable ways to detect and respond to potential threats.

Read the full report here.

AI Safety and the US-China Arms Race

Inspecting the claim that AI safety and US primacy are direct trade-offs

October 29, 2024

AI Alignment in Mitigating Risk: Frameworks for Benchmarking and Improvement

Policymakers and engineers should prioritize alignment innovation as AI rapidly develops

October 7, 2024

Healthcare Privacy in the Age of AI: Guidelines and Recommendations

The rapid growth of AI creates areas of concern in the field of data privacy, particularly for healthcare data

October 4, 2024