Pentagon's AI Concerns: Anthropic's Foreign Workforce Under Scrutiny

The Pentagon flags national security risks over Anthropic's employment of Chinese-born workers, raising broader questions about global talent in AI. A court showdown looms.
The Pentagon has raised alarms about Anthropic's reliance on foreign workers, particularly from China, citing potential national security threats. A court filing on March 17 revealed these concerns, highlighting the complexities of global talent in the AI industry.
Security Concerns
Pentagon undersecretary Emil Michael's declaration points to the risks posed by Anthropic's Chinese-born employees, suggesting these workers could be coerced under China's National Intelligence Law. Yet the documents tell a different story for other U.S. AI companies: the Pentagon trusts their leadership and security measures, which raises the question of why Anthropic has been singled out.
The designation also sits awkwardly with Anthropic's proactive measures, such as research compartmentalization and audit trails, which were supposed to mitigate exactly these risks. So what changed the Pentagon's assessment?
Wider Implications
This isn't just about Anthropic. Foreign-born workers form a significant part of the AI talent pool in the U.S., with Chinese-origin researchers making up nearly 40% of top AI talent by 2023. The affected communities weren't consulted, yet they bear the brunt of these security concerns.
It's ironic that Anthropic, an early adopter of operational security practices, disrupted a Chinese espionage campaign on its own platform last year and has barred users in the PRC from its services. Against that record, the Pentagon's skepticism seems misplaced.
What's at Stake?
The upcoming court hearing on March 24 could determine whether Anthropic gets temporary relief from the Pentagon's supply chain risk designation. But why is the Pentagon still relying on Anthropic's tools if the risks are so severe?
Accountability requires transparency. Here's what the Pentagon won't release: the specific measures other companies have in place that Anthropic supposedly lacks. The gap between policy and practice needs scrutiny, and this case might just force the issue.