Autonomous weapons are here. Weapons that can select and engage targets without further human intervention already exist, from robot dogs with sniper rifles to automated machine guns to kamikaze drones, and they have likely already claimed their first kill.
Development is ramping up. The Pentagon has already put someone in charge of “algorithmic warfare” and has requested over $3 billion for AI-related activities in 2024. The story is the same globally: spending on military robotics was estimated at around $7.5 billion in 2015 and is expected to grow to over $16 billion.
Autonomous weapons (AWs) bring many different potential harms.
Keeping a human in the loop isn’t a panacea. Restricting all autonomous weapons development to systems that require heavy human involvement is not a sustainable solution, because adversaries are unlikely to restrict themselves similarly. The U.S. would likely reduce human involvement in response so as not to lose its tactical advantage.
Guardrails on further development are needed, and there are many options.
Read the full report here.
Inspecting the claim that AI safety and US primacy are direct trade-offs
Policymakers and engineers should prioritize alignment innovation as AI rapidly develops
The rapid growth of AI creates areas of concern in the field of data privacy, particularly for healthcare data