US military uses Anthropic's Claude for AI-driven strike planning in Iran war

In the war against Iran, the US military is using generative AI at scale for target selection and strike planning for the first time. Of all models, the military chose one from the company Washington just banned.

This article was originally published by The Decoder.
Key Terms Explained
Anthropic: An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Claude: Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.
Decoder: The part of a neural network that generates output from an internal representation.
Generative AI: AI systems that create new content — text, images, audio, video, or code — rather than just analyzing or classifying existing data.