Anthropic Rejects Pentagon Ultimatum Over Military AI Use
CEO Dario Amodei says company “cannot in good conscience” allow unrestricted use of Claude as U.S. defense officials threaten emergency powers.

Artificial intelligence company Anthropic has publicly rejected demands from the U.S. Department of Defense to allow unrestricted military use of its AI systems, escalating a rare and highly visible standoff between a leading AI developer and the Pentagon.
CEO Dario Amodei said Thursday that the company “cannot in good conscience accede” to new contract language proposed by Defense Secretary Pete Hegseth, which he said failed to prevent potential use of Anthropic’s AI model Claude for mass surveillance or fully autonomous weapons.
The Pentagon has denied harboring such intentions but warned that Anthropic could face serious consequences if it does not comply by Friday.
What's News
The Pentagon demanded unrestricted military use of Anthropic’s AI systems.
Defense Secretary Hegseth reportedly gave the company a Friday deadline.
Anthropic rejected the revised contract terms, citing ethical concerns.
The Defense Department suggested it could invoke the Defense Production Act or designate Anthropic a supply chain risk.
Lawmakers from both parties criticized the public escalation.
The Core Dispute
Anthropic said new contract language made “virtually no progress” in preventing use of its AI for:
Mass surveillance of Americans
Fully autonomous weapons operating without human involvement
Anthropic’s internal policies prohibit such uses.
Pentagon spokesperson Sean Parnell responded that the military has “no interest” in conducting illegal surveillance or developing autonomous weapons without human oversight. He said the department simply wants to use Anthropic’s AI “for all lawful purposes.”
The Defense Department already maintains AI-related contracts with Google, OpenAI, and Elon Musk’s xAI. Anthropic is reportedly the last major AI provider that has not agreed to supply its technology for a new internal military network under the Pentagon’s preferred terms.
Threat of Emergency Powers
The confrontation escalated when Hegseth warned that if Anthropic refuses to comply, the Pentagon could:
Terminate its government contracts
Label the company a supply chain risk
Invoke the Defense Production Act, a Cold War-era law that allows the federal government to compel companies to prioritize national defense production
Amodei criticized the contradiction in those threats.
“One labels us a security risk; the other labels Claude as essential to national security,” he said.
Anthropic said it is not walking away from negotiations but is prepared to help transition to another provider if necessary.
Analysis
This clash reflects a deeper shift in how artificial intelligence is being positioned: not merely as a commercial tool but as strategic national infrastructure.
Unlike previous defense technology partnerships — where firms largely operated behind closed doors — this dispute has unfolded in public. That visibility matters.
Anthropic’s stance highlights a growing divide inside the AI industry. Some firms have embraced military contracts as part of national competitiveness. Others remain wary of enabling autonomous weapon systems or large-scale surveillance architectures.
From the Pentagon’s perspective, AI is now mission-critical. Defense officials argue that access to the most advanced models is necessary to maintain operational superiority in an era where adversaries are investing heavily in AI.
From Anthropic’s perspective, unrestricted contractual language could undermine both ethical guardrails and public trust. For an AI company positioning itself around safety and governance, reputational damage could outweigh short-term government revenue.
The most consequential issue may not be this single contract but the precedent it sets.
If the Defense Production Act were invoked to compel AI deployment, it would mark an extraordinary expansion of federal authority into frontier software technologies. That could deter private-sector innovation or push AI firms to restructure governance models to resist future pressure.
At the same time, policymakers face a difficult balance. If leading U.S. AI firms decline defense cooperation while foreign rivals integrate AI into military systems without similar constraints, national security leaders may view hesitation as strategic vulnerability.
Political Reaction
Sen. Thom Tillis criticized the public nature of the dispute, saying sensitive negotiations with strategic vendors should occur behind closed doors.
Sen. Mark Warner warned that the confrontation signals the Pentagon may be sidestepping AI governance safeguards and called for stronger congressional oversight in national security contexts.
The episode also unfolds amid reported cultural shifts within the Defense Department, where leadership has emphasized operational flexibility and reduced internal legal barriers.
Bottom Line
Anthropic’s refusal to grant unrestricted military use of its AI marks one of the clearest public fractures yet between Silicon Valley ethics frameworks and U.S. defense priorities.
At stake is not just one contract — but the broader question of how artificial intelligence will be governed when commercial innovation intersects with military power.
As AI becomes embedded in national security systems, the tension between safety, sovereignty, and strategic advantage is no longer theoretical.
It is playing out in real time.


