

The U.S. Department of Defense has escalated its standoff with Anthropic after a collapsed defense deal prompted the Trump administration to label the AI startup a supply chain risk. Pentagon Chief Technology Officer Emil Michael warns that the company’s Claude AI models could “pollute” the military supply chain because of their built-in policy restrictions.
On CNBC’s “Squawk Box” on March 12, 2026, Michael said Claude’s “constitution” carries a “different policy preference” that could affect critical defense systems. He added that a phased approach is necessary, noting, “we can’t just rip out Anthropic overnight.”
Michael said, “We can’t have a company that has a different policy preference… pollute the supply chain so our war fighters get ineffective weapons, ineffective body armor, ineffective protection.” He added that the supply chain risk designation is “not meant to be punitive,” since only a small part of Anthropic’s business involves the U.S. government. The remarks come amid escalating tensions between Anthropic and the Pentagon over the use of Claude AI in defense projects.
The designation, which followed Anthropic's failed agreement with the Pentagon, marks the first time a U.S. company has been publicly identified as a supply chain risk, a label typically reserved for foreign entities that pose national security concerns. In March 2026, the Pentagon officially notified Anthropic of the designation and directed defense contractors to refrain from using Claude in Pentagon-related projects.
The conflict began when Anthropic refused to remove safeguards in Claude AI that limit its use in fully autonomous weapons and mass surveillance, clashing with the Pentagon’s demand for unrestricted “all lawful use.” After being designated a supply chain risk, Anthropic sued the Department of Defense, calling the designation “unprecedented and unlawful,” while clarifying that its usage restrictions apply only to DoD contracts. Reports have also linked Claude AI to Palantir targeting technology used by U.S. forces in Iran.