TL;DR
Britain is investing £1bn in an AI-powered “digital targeting web” to enable faster battlefield decisions. The technology draws on lessons from Ukraine, where cheap drones are revolutionising warfare, and raises pressing questions about autonomous weapons and human judgment in military operations.
AI at the Heart of Military Architecture
In a London demonstration, a map of the Baltic Sea flickered to life. Within seconds, AI-powered software identified a vessel heading towards an undersea telecommunications cable as a potential threat, allowing an operator to dispatch military assets immediately. The scenario was based on real events: in November 2024, two submarine communications cables were damaged within 24 hours by unknown actors.
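At its simplest, flagging such a vessel is a geometry check of its reported position, course and speed against a protected asset. The Python sketch below is a minimal, hypothetical illustration of that idea, not the demonstrated software’s actual logic; the Track fields, thresholds and coordinates are all invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """A simplified vessel track, e.g. from an AIS-style feed (hypothetical fields)."""
    name: str
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    course_deg: float   # course over ground, degrees clockwise from north
    speed_kn: float     # speed, knots

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def is_potential_threat(track, asset_lat, asset_lon,
                        course_tolerance_deg=20.0, min_speed_kn=2.0):
    """Flag a vessel whose course points at a protected asset, such as a point
    on a cable route. Thresholds are illustrative, not operational values."""
    to_asset = bearing_deg(track.lat, track.lon, asset_lat, asset_lon)
    off_course = abs((track.course_deg - to_asset + 180) % 360 - 180)
    return track.speed_kn >= min_speed_kn and off_course <= course_tolerance_deg

# A vessel in the Baltic heading roughly at a point on a cable route.
vessel = Track("unknown_vessel", lat=55.2, lon=15.6, course_deg=140.0, speed_kn=8.0)
print(is_potential_threat(vessel, asset_lat=54.9, asset_lon=16.1))  # True
```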
Britain has earmarked £1bn to build this “digital targeting web”: an AI system designed to fuse data from multiple civilian and military sources into a single command network. Sir Richard Barrons, a former British Army general and co-author of the UK’s 2025 Strategic Defence Review, said: “AI underpins the entire military architecture, it is or will be at the heart of the targeting process.”
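In practice, “fusing” data from many feeds means deciding which detections refer to the same object and merging them into one track. The sketch below illustrates the idea with naive spatial binning; real systems use probabilistic data association, and the feed names and cell size here are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical detections from different feeds: (source, lat, lon).
detections = [
    ("coastal_radar", 55.21, 15.61),
    ("ais",           55.20, 15.60),
    ("satellite",     55.22, 15.59),
    ("ais",           57.10, 18.30),  # a second, unrelated vessel
]

def fuse(detections, cell_deg=0.1):
    """Group detections that fall in the same lat/lon cell into one fused track.
    Real fusion uses probabilistic data association; binning keeps the idea visible."""
    cells = defaultdict(list)
    for src, lat, lon in detections:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells[key].append((src, lat, lon))
    tracks = []
    for hits in cells.values():
        tracks.append({
            "lat": sum(h[1] for h in hits) / len(hits),   # averaged position
            "lon": sum(h[2] for h in hits) / len(hits),
            "sources": sorted({h[0] for h in hits}),      # which feeds agree
        })
    return tracks

for track in fuse(detections):
    print(track)  # one track confirmed by three feeds, one seen by AIS alone
```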
The US has already deployed an AI-powered command system called Maven, designed by Palantir and reportedly used during air strikes in Yemen in 2024. NATO has since adopted variants of the software.
From Human Operators to Autonomous Swarms
The war in Ukraine has demonstrated how cheap first-person-view drones can help smaller forces counter larger armies. The next step is autonomy—drones and ground robots that no longer require one-to-one human operators.
The US Marine Corps recently trained with multiple quadcopter drones running software that “applies real-time AI to allow soldiers to deploy swarms that operate as a single force.” Ukraine has approved the semi-autonomous tracked robot “Krampus” for battlefield deployment, while German startup ARX Robotics builds unmanned ground vehicles that can navigate autonomously when communications fail.
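A rough way to picture a swarm “operating as a single force” is consensus-style control: each drone steers toward one shared goal while keeping local separation from its neighbours, so a single operator command moves the whole formation. The toy flocking sketch below illustrates that idea only; it is not the Marine Corps software, and every parameter is invented.

```python
import random

def step(positions, goal, speed=1.0, sep_radius=2.0, sep_gain=0.5):
    """One tick of a toy swarm: each drone moves toward a shared goal while
    pushing away from neighbours inside sep_radius, so one commanded goal
    steers the whole formation."""
    updated = []
    for i, (x, y) in enumerate(positions):
        dx, dy = goal[0] - x, goal[1] - y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        vx, vy = speed * dx / dist, speed * dy / dist     # attraction to goal
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            rx, ry = x - ox, y - oy
            d = max((rx * rx + ry * ry) ** 0.5, 1e-9)
            if d < sep_radius:                            # repulsion from close neighbours
                vx += sep_gain * rx / d
                vy += sep_gain * ry / d
        updated.append((x + vx, y + vy))
    return updated

random.seed(0)
swarm = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(6)]
for _ in range(60):                   # one operator command, six drones respond
    swarm = step(swarm, goal=(30.0, 30.0))
print(swarm)  # all six cluster near the goal, spaced apart
```

Changing the single goal value redirects all six drones at once, which is the point: one operator, one command, many platforms.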
The UK is investing roughly £2bn annually in a sixth-generation fighter to work alongside autonomous “loyal wingmen.” Beneath the waves, Australia has purchased Anduril’s autonomous Ghost Shark submarine, while Helsing’s SG-1 Fathom uses AI to classify acoustic signatures of ships and submarines.
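Acoustic classification of the kind attributed to the SG-1 Fathom can be pictured as reducing a sound snippet to a spectral signature and matching it against labelled references. The sketch below does this with FFT band energies and a nearest-centroid match on synthetic tones; it illustrates the general technique, not Helsing’s method, and all signals and labels are fabricated.

```python
import numpy as np

def signature(samples, bands=32):
    """Reduce an audio snippet to a coarse spectral signature:
    log-spaced FFT band energies, L2-normalised (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(samples))
    edges = np.logspace(0, np.log10(len(spectrum) - 1), bands + 1).astype(int)
    sig = np.array([spectrum[a:b + 1].mean() for a, b in zip(edges[:-1], edges[1:])])
    return sig / (np.linalg.norm(sig) + 1e-12)

def classify(snippet, references):
    """Nearest-centroid match: pick the label whose reference signature
    has the highest cosine similarity with the snippet's signature."""
    sig = signature(snippet)
    return max(references, key=lambda label: float(sig @ references[label]))

# Synthetic stand-ins, sampled at 4096 Hz for one second: a "ship" with a
# 50 Hz propeller tone and a "submarine" with a lower 12 Hz tone.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
references = {
    "ship": signature(np.sin(2 * np.pi * 50 * t)),
    "submarine": signature(np.sin(2 * np.pi * 12 * t)),
}
unknown = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(classify(unknown, references))  # "ship"
```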
Looking Forward
As autonomy spreads, ethical questions deepen. Jessica Dorsey of Utrecht University warns that AI systems may compress “complex moral and legal judgments into algorithmic models.” The twin challenges of automation bias (trusting systems by default) and action bias (feeling compelled to act because the system demands it) could reshape the nature of command responsibility. How much judgment humans surrender to algorithms may determine both military outcomes and moral accountability.
Source: Financial Times