The Pentagon's AI Gamble: Targeting and the Future of Warfare

As the Pentagon explores generative AI for military targeting, questions about human oversight and the speed of decision-making are coming into focus, sharpened by scrutiny of a recent strike.
The U.S. military is reportedly considering the use of generative AI systems to prioritize military targets, a revelation that comes as the Pentagon faces scrutiny over a recent strike on an Iranian school. According to a Defense official familiar with the matter, these AI systems could rank targets and suggest priorities, though humans would ultimately vet the recommendations.
A New Era of Targeting
According to the official, the scenario works like this: a list of potential targets is fed into a generative AI system cleared for classified military use, and the model analyzes and ranks those targets, factoring in variables such as aircraft locations. OpenAI and xAI are already in discussions to have their models, ChatGPT and Grok among them, deployed in such settings, though the official would not confirm whether any of these systems are currently active.
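To make that workflow concrete, here is a minimal sketch of the general "model suggests, human vets" pattern the official describes, assuming a generic chat-completion API via OpenAI's Python client. The model name, prompt, and record format are hypothetical placeholders; any real system of this kind would run on very different, classified infrastructure.

```python
# Illustrative sketch only: the generic "model suggests, human vets" pattern
# described above. The client, model name, and record format are hypothetical
# placeholders, not a description of any actual military system.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_ranking(records: list[dict]) -> list[str]:
    """Ask a model to rank structured records; return its raw ranked id list."""
    prompt = (
        "Rank the following records from highest to lowest priority. "
        "Respond with only a JSON array of their 'id' values:\n"
        + json.dumps(records)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Parsing is fragile: the model may return prose instead of clean JSON.
    return json.loads(response.choices[0].message.content)
```

In this pattern the model's output is strictly advisory: whatever it returns goes to a human reviewer before anything is acted on.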
Separately, reports suggest that Anthropic's Claude has already been integrated into existing military AI frameworks and used in operations in countries including Iran and Venezuela. The broader trend is clear: the military is leaning on AI to expedite target identification, even as each model brings its own limitations.
The Legacy of Project Maven
Since 2017, the U.S. military's Project Maven has employed AI, particularly computer vision, to sift through enormous data sets from surveillance operations. By algorithmically identifying targets in drone footage, Maven has expedited the approval process for military actions. Soldiers interact with the system through a dashboard that color-codes potential threats and friendly forces.
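For illustration only, here is a minimal sketch of the kind of color-coded overlay such a dashboard might draw, assuming a hypothetical detection format; the detector itself, and Maven's actual interface, are out of scope.

```python
# Minimal sketch of a color-coded overlay on one frame of footage.
# The detection format and color scheme are assumptions, and the
# computer-vision model that produces detections is not shown.
import cv2  # pip install opencv-python

# Hypothetical detections: (label, confidence, bounding box as x, y, w, h).
detections = [
    ("threat", 0.91, (120, 80, 60, 40)),
    ("friendly", 0.87, (300, 200, 50, 45)),
]
COLORS = {"threat": (0, 0, 255), "friendly": (0, 255, 0)}  # BGR: red / green

frame = cv2.imread("frame.jpg")  # a single frame of footage
for label, conf, (x, y, w, h) in detections:
    color = COLORS.get(label, (255, 255, 255))
    cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 6),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
cv2.imwrite("annotated.jpg", frame)
```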
Layering generative AI on top adds a conversational interface, potentially accelerating decision-making even further. The trade-off is verification: these systems make analysis easier to query but harder to audit, because a model's output cannot simply be taken on trust.
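One concrete reason those outputs are hard to trust: a generative model's ranking is just text, so a pipeline has to check that the model did not invent, drop, or duplicate entries. A minimal sketch of that kind of audit, continuing the hypothetical example above:

```python
def audit_ranking(records: list[dict], ranked_ids: list[str]) -> list[str]:
    """Flag basic inconsistencies in a model-produced ranking for human review."""
    problems = []
    input_ids = {r["id"] for r in records}
    # Ids the model invented rather than took from the input.
    for i in ranked_ids:
        if i not in input_ids:
            problems.append(f"unknown id in output: {i}")
    # Ids the model silently dropped.
    for i in input_ids.difference(ranked_ids):
        problems.append(f"input id missing from output: {i}")
    # Ids the model repeated.
    if len(ranked_ids) != len(set(ranked_ids)):
        problems.append("duplicate ids in output")
    return problems
```

Checks like these catch only mechanical errors; whether the ranking itself is sound still requires human judgment.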
Human Oversight in the AI Era
Despite the promise of faster target prioritization, the recent strike on an Iranian school, which multiple news outlets have linked to outdated targeting data, casts a shadow over AI's military use and raises questions about the safety and reliability of these technologies. The Pentagon is still investigating, and whether generative AI played any role remains an open question.
In recent months, the Pentagon has embraced AI more broadly, offering non-classified generative AI tools to service members. Yet only a select few models have been approved for sensitive operations. Anthropic's Claude was the first, until recent disputes led to its designation as a supply chain risk and President Trump demanded a halt to its military use. Meanwhile, OpenAI and xAI have struck agreements to deploy their models in classified settings, albeit with unspecified limitations.
Skepticism is warranted: can models that require constant human oversight be trusted anywhere near life-and-death decisions? The balance between speed and accountability remains precarious, and the military's AI ambitions are under the microscope. The future of warfare may well depend on how this gamble unfolds.
Key Terms Explained
Anthropic: An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Claude: Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.
Computer vision: The field of AI focused on enabling machines to interpret and understand visual information from images and video.
Generative AI: AI systems that create new content, such as text, images, audio, video, or code, rather than just analyzing or classifying existing data.