The U.S. Department of Defense (the Pentagon) officially notified Anthropic on Wednesday, March 4, 2026, of its inclusion on the 'supply chain risk' list. The decision, reported by media on Thursday, effectively prohibits defense contractors from using Claude AI models in products built for the government. The basis for this unprecedented step against an American company was Anthropic's refusal to change its Acceptable Use Policy at the military's request.
The conflict had been brewing for several weeks. The Pentagon, specifically Defense Secretary Pete Hegseth, demanded that Anthropic relax its internal restrictions to allow Claude to be used for two key purposes: creating autonomous lethal weapons ('killer robots') operating without direct human control, and conducting mass surveillance. Anthropic, whose corporate constitution and safety policy are among its key marketing differentiators, insisted on maintaining these 'red lines.' Negotiations reached an impasse, and after Anthropic's public refusal to make concessions last Thursday, the Pentagon carried out its threat.
Technically, the 'supply chain risk' status is typically applied to foreign companies linked to unfriendly states. In Anthropic's case, the Pentagon is for the first time publicly using this tool against a domestic developer. In his statement, Secretary Hegseth went even further, saying that contracts would be terminated with any contractors engaged in 'any commercial activity' with Anthropic, even outside the scope of defense projects. Claude itself, according to reports, is already deeply integrated into some systems, such as intelligence analysis tools reportedly instrumental in a recent successful U.S. operation.
Anthropic CEO Dario Amodei, in a response posted on the corporate blog, confirmed receipt of the notification and called the Pentagon's actions legally questionable. 'We see no choice but to challenge this in court,' he wrote. The company intends to defend a private company's right to set ethical limits on the use of its technologies. The Pentagon, for its part, insists that such strict restrictions imposed by a commercial entity are unacceptable for national security and call state sovereignty into question.
This precedent has enormous significance for the entire AI industry. It raises a fundamental question about the balance of power between the state and technology companies on matters of ethics and security. If the Pentagon's position prevails, other AI developers, including major players, could face similar pressure to provide the military and intelligence agencies with unrestricted access to their models. For users and businesses, this means the risk of further fragmentation of the AI market into 'military' and 'civilian' models, as well as a potential slowdown in innovation due to legal uncertainty.
The conflict's prospects now depend on the courts. Anthropic is likely to challenge both the legality of applying the 'supply chain risk' status to a domestic company and Secretary Hegseth's overly broad interpretation of the rule. Concurrently, the Pentagon and the Trump administration have set a six-month deadline for removing Claude from government systems, which will be a technically challenging task. The outcome of this confrontation will define new rules of the game at the intersection of advanced technology, government regulation, and corporate ethics for years to come.