Pentagon Blacklists Anthropic's Claude AI in Trump Administration Standoff
The Pentagon has designated Anthropic as a "supply chain risk," effectively blacklisting the AI company's Claude chatbot from government use after Anthropic refused to change its safety policies.
The tech industry is reeling after Defense Secretary Pete Hegseth officially designated AI company Anthropic as a "supply chain risk" this week, effectively banning its Claude chatbot from all government contracts.
The move comes after months of tension between the Trump administration and Anthropic, which has refused to loosen its AI safety policies despite pressure from the Pentagon.
What Sparked the Fight
According to a scathing 1,600-word memo sent to employees by Anthropic CEO Dario Amodei on Friday, the company's relationship with the government soured because "we haven't donated to Trump" and "we haven't given dictator-style praise to Trump."
Amodei suggested that unlike OpenAI and its executives, Anthropic has maintained independence from political pressure. The CEO's memo painted the Pentagon's actions as retaliation for the company's unwillingness to pander to the current administration.
Defense Contractors Jump Ship
The designation has immediate real-world consequences. Defense contractors who do business with the US military are already pivoting away from Claude, according to CNBC. Companies say they are abandoning the AI tool "out of an abundance of caution" rather than risk losing lucrative government contracts.
While Anthropic can still challenge the designation in court, the damage to its government business appears swift and severe.
For the broader AI industry, this fight is a canary in the coal mine. If the government can effectively blacklist a company for refusing to bend to political will, what does that mean for AI development and innovation in America?
Key Terms Explained
AI safety: The broad field studying how to build AI systems that are safe, reliable, and beneficial.
Anthropic: An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Chatbot: An AI system designed to have conversations with humans through text or voice.
Claude: Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.