Pentagon CTO Warns Anthropic’s Claude AI Could ‘Pollute’ US Military Supply Chain

Pentagon CTO warns Claude AI’s built-in policies could compromise military supply chains, while Anthropic disputes the designation and pursues legal action
The Bridge Chronicle

The U.S. Department of Defense has escalated its standoff with Anthropic following a disagreement over a defense deal, which previously led the Trump administration to label the AI startup a supply chain risk. Pentagon Chief Technology Officer Emil Michael warns that the company’s Claude AI models could “pollute” the military supply chain due to built-in policy preferences.


On CNBC’s “Squawk Box” on March 12, 2026, Michael said Claude’s “constitution” carries a “different policy preference” that could affect critical defense systems. He added that a phased approach is necessary, noting, “we can’t just rip out Anthropic overnight.”


Michael said, “We can’t have a company that has a different policy preference… pollute the supply chain so our war fighters get ineffective weapons, ineffective body armor, ineffective protection.” He added that the supply chain risk designation is “not meant to be punitive,” as only a small part of Anthropic’s business involves the US government. The remarks come amid escalating tensions between Anthropic and the Pentagon over the use of Claude AI in defense projects.

The designation, which followed Anthropic’s failed agreement with the department, marks the first time a US company has been publicly identified as a supply chain risk, a label typically reserved for foreign entities that pose national security concerns. In March 2026, the Pentagon officially informed Anthropic of the designation, directing defense contractors to refrain from using Claude in Pentagon-related projects.


What is the Anthropic-Pentagon Clash?

The conflict began when Anthropic refused to remove safeguards in Claude AI that limited its use in fully autonomous weapons and mass surveillance, clashing with the Pentagon’s demand for unrestricted “all lawful use.” Following the supply chain risk designation, Anthropic sued the Department of Defense, calling the designation “unprecedented and unlawful,” while clarifying that its usage restriction applies only to DoD contracts. Reports have also linked Claude AI to Palantir targeting technology used by U.S. forces in Iran.
