Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations — company refused to allow its AI to be used for autonomous weapons

The lawsuits, the first filed in the U.S. District Court for the Northern District of California and the second in the U.S. Court of Appeals for the D.C. Circuit, allege that the Trump administration violated Anthropic's First Amendment and due process rights, according to Reuters. Anthropic is asking the courts to vacate the designation, block its enforcement, and require federal agencies to withdraw directives to drop the company's tools. The company said the actions could jeopardize "hundreds of millions of dollars" in revenue in the near term.

This dispute traces back to a contract renegotiation between Anthropic and the Department of War that collapsed in late February. The Pentagon wanted unrestricted access to Claude for "any lawful use," while Anthropic refused to remove two guardrails: a prohibition on using its models for fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens.

Defense Secretary Pete Hegseth formally issued the supply chain risk designation on February 27; Anthropic was officially notified on March 3. President Trump separately directed all federal agencies to stop using Anthropic's technology via a Truth Social post that same day, with a six-month phase-out period.

The Pentagon argued that private companies cannot dictate how the government uses technology in national security scenarios, and that Anthropic's restrictions could endanger American lives. Anthropic countered that current AI models are not reliable enough for fully autonomous weapons deployment, and that domestic surveillance at scale would violate fundamental rights.

The fallout has had immediate competitive consequences, with OpenAI's Sam Altman controversially striking a new Pentagon deal within hours of Anthropic's designation. Altman publicly stated that the Department of War shares OpenAI's principles on human oversight of weapons and opposition to mass surveillance. xAI, Elon Musk's AI company, is also understood to have since been cleared for use on classified Pentagon systems.

Anthropic was previously the first AI lab permitted to operate on the Department of War's classified networks, and it signed a contract worth up to $200 million with the department in July 2025. The Wall Street Journal has previously reported that Claude had been used in military operations, including intelligence assessments and target identification in the U.S.'s ongoing conflict with Iran, even after the Pentagon moved to drop the model.
