Autonomous Weapons
Autonomous weapons—formally known as Lethal Autonomous Weapons Systems (LAWS)—are weapons that use artificial intelligence to independently select and engage targets without requiring direct human intervention. They represent one of the most consequential and contested frontiers of AI deployment, sitting at the intersection of military strategy, international law, AI ethics, and geopolitical competition.
The spectrum of autonomy in weapons systems is broad. At one end are existing systems like Israel's Iron Dome, which autonomously intercepts incoming rockets and missiles, a defensive application where the speed of engagement makes real-time human decision-making impractical. At the other end are fully autonomous offensive systems capable of identifying, selecting, and killing human targets without a human in the loop. Between these poles lie "human-in-the-loop" systems, where a human authorizes each use of lethal force, and "human-on-the-loop" systems, where a human can override engagements but does not approve each one individually.
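The difference between these modes is easiest to see as an authorization policy: who must say yes, or may say no, before an engagement proceeds. The sketch below is a minimal illustration in Python, using hypothetical names; it corresponds to no real system and deliberately contains no sensing or targeting logic, only the structure of human control.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # a human must approve each engagement
    HUMAN_ON_THE_LOOP = auto()   # a human may veto, but approval is not required
    FULLY_AUTONOMOUS = auto()    # no human input in the decision cycle

@dataclass
class EngagementRequest:
    """Hypothetical record of a proposed engagement awaiting authorization."""
    track_id: str       # illustrative identifier, not a real schema
    confidence: float   # the system's self-reported identification confidence

def is_authorized(
    request: EngagementRequest,
    mode: ControlMode,
    human_response: Optional[bool] = None,  # True=approve, False=veto, None=silence
) -> bool:
    """Models only the authorization structure of each control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Fails closed: silence or an absent operator is treated as refusal.
        return human_response is True
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Fails open: the engagement proceeds unless actively vetoed in time.
        return human_response is not False
    # FULLY_AUTONOMOUS: the human is never consulted.
    return True
```

The asymmetry between the first two branches captures much of the policy debate in miniature: in-the-loop fails closed (no human response means no engagement), while on-the-loop fails open (silence means the system proceeds). That is why the latency of human oversight matters so much for defensive systems like Iron Dome, where engagement windows last seconds.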
The technology enabling autonomous weapons draws from computer vision for target identification, simultaneous localization and mapping (SLAM) for navigation in contested environments, swarm intelligence for coordinated multi-drone operations, and reinforcement learning for tactical decision-making. Autonomous drones have already been used in combat: Turkey's Kargu-2 reportedly engaged targets in Libya in 2020 without explicit operator authorization, and drone warfare in Ukraine has accelerated the development of AI-guided munitions on both sides.
International governance of autonomous weapons has moved slowly relative to the technology. In November 2025, 156 states supported a UN General Assembly resolution on autonomous weapons systems, and the Group of Governmental Experts (GGE) on LAWS is aiming for consensus by 2026 on a draft for a possible new legal instrument. The International Committee of the Red Cross (ICRC) has called for new legally binding rules ensuring meaningful human control over the use of force. But major military powers—particularly the US, China, Russia, and Israel—have resisted binding restrictions, arguing that autonomous systems can be more precise and reduce civilian casualties compared to human operators under stress.
The "Stop Killer Robots" campaign frames the issue as a moral bright line: machines should never make life-and-death decisions about humans. The counterargument, made by military AI advocates, is that autonomous weapons can reduce errors of judgment, operate with consistent rules of engagement, and make targeting decisions without the fear, fatigue, and anger that lead to war crimes. Both sides invoke the same ethical frameworks—just with radically different conclusions about whether AI judgment can meet the threshold of meaningful human control required by international humanitarian law.
The geopolitical dimension is perhaps the most decisive factor. Both the US and China are investing heavily in autonomous weapons capabilities, creating a dynamic where unilateral restraint is seen as strategic disadvantage. The US Department of Defense's Replicator initiative aims to field thousands of autonomous systems, while China's military-civil fusion strategy blurs the line between commercial AI research and weapons development. This arms race dynamic—where each side's investment justifies the other's—makes international governance agreements both more urgent and more difficult to achieve.