‘Incoherent’: Hegseth’s Anthropic Ultimatum Confounds AI Policymakers

Legal experts and AI policy leaders warn that the Pentagon’s dual threat against Anthropic could undermine trust between Silicon Valley and the U.S. government.

By Behind the Tech · Published about 8 hours ago · 4 min read

Defense Secretary Pete Hegseth has triggered alarm across the AI policy and legal communities after issuing an ultimatum to Anthropic, demanding the company lift key usage restrictions on its Claude AI model or face severe government retaliation.

At the center of the dispute is Anthropic’s refusal to remove what it calls ethical “red lines” — specifically prohibitions against using its AI for mass surveillance of U.S. citizens or for fully autonomous weapons systems. According to multiple reports, Hegseth gave Anthropic CEO Dario Amodei a deadline: provide the military with unrestricted access to Claude for “all lawful purposes” by Friday evening or risk being designated a “supply chain risk.”

What Is the Pentagon Threatening?

The Pentagon’s approach involves two separate — and, critics argue, contradictory — actions:

Label Anthropic a “supply chain risk.”

This designation is typically reserved for companies tied to adversarial nations, such as China's Huawei. If applied to Anthropic, it could bar government contractors from using Claude in defense-related work.

Invoke the Defense Production Act (DPA).

The Cold War-era statute allows the federal government to compel private companies to prioritize national defense needs. During the Covid-19 pandemic, it was used to accelerate vaccine and medical supply production.

Critics say attempting both simultaneously makes little sense.

Dean Ball, a former AI adviser involved in drafting the White House’s AI Action Plan, described the strategy as “incoherent.” In his view, forcing the Defense Department to rely on Anthropic’s model while simultaneously warning other defense contractors not to use it creates a logical contradiction.

Legal experts echo that concern. Katie Sweeten, a former Justice Department official who served as liaison to the Pentagon, questioned how the government could both compel cooperation and classify the same company as a national security risk.

What Triggered the Confrontation?

The standoff reportedly escalated after a January military operation to capture Venezuelan leader Nicolás Maduro, during which Claude was used via Anthropic’s corporate partner Palantir. Anthropic later sought clarification about how its AI had been deployed — a move that reportedly irritated Pentagon officials.

Anthropic, along with OpenAI, Google, and xAI, signed a $200 million Pentagon contract last summer. However, tensions have grown as the Department of Defense insists AI systems must be usable for “all lawful military applications,” without ideological constraints.

The Pentagon maintains that legality is the military’s responsibility, not the AI vendor’s.

What Is the News?

The Pentagon is reviewing its relationship with Anthropic.

Defense officials have reportedly asked major contractors to assess their reliance on Claude as part of a potential “supply chain risk” designation.

Hegseth has threatened to invoke the Defense Production Act if Anthropic does not comply.

Anthropic has reiterated it will not lift restrictions on AI-powered mass surveillance of Americans or fully autonomous weapons.

Other AI firms, including OpenAI, Google, and xAI, are reportedly in discussions about expanding classified-system access.

What Is the Analysis?

The Pentagon’s strategy signals a significant shift in how the U.S. government may handle disagreements with domestic AI firms. Historically, supply chain risk designations have targeted foreign adversaries. Applying that label to a U.S.-based startup could redefine the boundary between national security authority and private-sector autonomy.

The contradiction highlighted by legal scholars matters beyond semantics. If the Defense Department compels a company’s technology under the DPA, it implicitly acknowledges the tool’s strategic importance. Simultaneously labeling it a security threat could undermine confidence among contractors, investors, and corporate partners.

More broadly, this conflict may chill innovation partnerships. Silicon Valley firms have increasingly entered national security work, especially in AI, cybersecurity, and cloud infrastructure. A precedent where policy disagreements lead to punitive designations could discourage companies from engaging deeply with defense projects — or encourage them to avoid public commitments to ethical constraints.

On the political front, the issue is drawing bipartisan scrutiny. Lawmakers across ideological lines have questioned expanding executive power under the Defense Production Act. The statute’s use for AI policy — rather than wartime manufacturing or emergency medical production — could invite legal challenges and congressional intervention.

Finally, the clash underscores a deeper philosophical divide: who defines “lawful use” in an era when AI systems can scale surveillance and automate lethal force? Anthropic argues that legal permissibility is not the same as ethical acceptability. The Pentagon argues that operational decisions belong to the military, not technology vendors.

That unresolved tension may define the next phase of U.S. AI governance.

What Comes Next?

With the Friday deadline approaching, several scenarios are possible:

Anthropic could compromise partially, adjusting but not eliminating safeguards.

The Pentagon could follow through with either the DPA or the supply chain designation.

The company could challenge government action in court.

Or the confrontation could de-escalate quietly through negotiated terms.

Regardless of the outcome, the episode marks a pivotal moment. It raises questions not just about AI guardrails, but about how much leverage the federal government should exert over private companies shaping frontier technologies.

The broader AI race — domestically and globally — continues. But the fight between Anthropic and the Pentagon reveals that the most contentious battleground may not be foreign competition, but the domestic struggle over how powerful AI should be used — and who ultimately decides.
