AI in the Crosshairs: The Military's New Target Ranking Game

The US military is eyeing AI chatbots like ChatGPT for important targeting decisions, sparking concerns over ethics and accountability in warfare.
Welcome to the era where your friendly AI chatbot could decide which military targets to strike first. Yes, that's right. The US Defense Department is considering using generative AI systems to rank targets in warfare. Spare me the roadmap; you can already hear the ethical alarms blaring.
AI Takes Aim
The plan is to feed a list of potential targets into a classified AI system. Humans would then oversee this mechanized omniscience, checking and evaluating the AI's prioritized list. But let's not fool ourselves. No amount of human oversight can completely mitigate the risks of AI deciding life-and-death matters.
Among the contenders for this high-stakes role are OpenAI's ChatGPT and xAI's Grok. Apparently, these digital oracles could soon be offering more than just conversational banter. Naturally, the idea of AI in military decision-making isn't new, but giving it a say in targeting decisions? That's a different beast altogether.
The Pentagon's Tech Tussle
In a different corner of the tech battlefield, the Pentagon's Chief Technology Officer claims that Claude, another generative AI, would "pollute" the defense supply chain. The official blames a policy preference embedded in the model, underscoring a broader conflict over AI's role in military affairs. Meanwhile, Anthropic, the company behind Claude, is reeling from OpenAI's agreement with the Department of Defense. The optics couldn't be worse.
As if that weren't enough, Meta has postponed its latest AI launch due to performance issues, lagging behind rivals like Google and OpenAI. Meta's former AI chief isn't betting on large language models either. I've seen enough to know that the AI race is as much about politics as it is about technology.
The Ethics of Automated War
Back to targeting decisions. The whole endeavor raises ethical concerns that seem to have been swept under the rug. How do we ensure accountability when algorithms make split-second decisions? And what happens when the data fed into these systems is flawed or biased? An AI misfire could escalate conflicts with dire consequences.
Could it also be the beginning of an accountability crisis in warfare? When things go wrong, who takes the blame? The machine or the human who signed off on its recommendations? These are questions that need answering before chatbots start playing war games.
The press release said innovation. The 10-K said losses. In this case, the real loss could be our ethical standards in the theater of war. If AI becomes a utility like electricity, as OpenAI's CEO Sam Altman suggests, then perhaps we're inching dangerously close to a world where intelligence, not just energy, is commodified.
Key Terms Explained
Anthropic: An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Chatbot: An AI system designed to have conversations with humans through text or voice.
Claude: Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.
Generative AI: AI systems that create new content — text, images, audio, video, or code — rather than just analyzing or classifying existing data.