Trump Orders Government-Wide Ban on Anthropic After Pentagon Standoff Over AI Safeguards

The president branded the company "leftwing nut jobs" and gave agencies six months to phase out its technology, as the Defense Secretary designated Anthropic a supply chain risk to national security.

The Trump administration on Friday ordered every federal agency to cease using artificial intelligence technology made by Anthropic, escalating an extraordinary public confrontation between the White House and one of the world's most advanced AI companies over the limits of military AI.

President Trump announced the directive on Truth Social, calling Anthropic a "Radical Left AI company run by people who have no idea what the real World is all about." He added: "We don't need it, we don't want it, and will not do business with them again!"

Within ninety minutes of Trump's post, Defense Secretary Pete Hegseth followed through on weeks of threats by formally designating Anthropic a supply chain risk to national security — a label previously reserved for foreign adversaries and never before applied to an American company. Hegseth barred any contractor, supplier, or partner doing business with the US military from conducting commercial activity with Anthropic, effective immediately.

The Core Dispute

The clash centres on two narrow safeguards that Anthropic CEO Dario Amodei insisted must remain in the company's Pentagon contract: a prohibition on mass domestic surveillance of Americans, and a prohibition on fully autonomous weapons systems that remove humans entirely from targeting decisions.

In a detailed public statement published Thursday — roughly 24 hours before the deadline Hegseth had set — Amodei said the company "cannot in good conscience accede" to the Pentagon's demand for unrestricted access. He argued that current AI systems are "simply not reliable enough to power fully autonomous weapons" and that AI-driven mass surveillance "presents serious, novel risks to our fundamental liberties."

Anthropic said that in recent negotiations, contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will" — a characterisation the Pentagon has not publicly disputed.

The Pentagon countered that it sought to use Claude, Anthropic's AI model, "for all lawful purposes," and that a contractor presuming to set policy constraints on a government client could not be relied upon as a military partner. Officials had threatened to invoke the Defense Production Act to compel removal of the safeguards — a threat Amodei called "inherently contradictory," noting that one action brands the company a security risk while the other treats its product as essential to national security.

Six-Month Window and Criminal Threats

While Trump ordered "immediate" cessation across most agencies, he carved out a six-month transition period for the Pentagon and intelligence community, where Claude is already embedded in classified networks through a partnership with Palantir under a $200 million contract awarded in July 2025. The phased timeline matters: Anthropic is currently the only AI company with a model deployed on the Pentagon's classified systems, and abruptly removing Claude from active intelligence analysis and operational planning workflows could, as the New York Times reported, "vastly complicate intelligence analysis and defense work."

Trump warned of "major civil and criminal consequences" if Anthropic fails to cooperate during the transition. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply," he wrote.

Silicon Valley Split

The confrontation has divided the technology industry. Elon Musk backed the administration, writing on X that "Anthropic hates Western Civilization." Musk has a direct financial stake in the outcome: his AI venture, xAI, which makes the chatbot Grok, is a leading candidate to replace Anthropic, and the Pentagon reportedly plans to give Grok access to classified military networks.

OpenAI CEO Sam Altman struck a more conciliatory tone, voicing sympathy for Anthropic's safety concerns and saying he "largely trusted" the company's intentions. Employees at Google and OpenAI circulated open letters supporting Amodei's position.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, said the punitive action "combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."

What Anthropic Gave Up — and Refused To

Amodei's statement underscored that Anthropic has been among the most cooperative AI companies in defence work. It was the first frontier AI firm to deploy models on classified government networks, the first at the National Laboratories, and the first to provide custom models for national security customers. It voluntarily forfeited hundreds of millions of dollars in revenue by cutting off Chinese military-linked companies from using Claude, and shut down CCP-sponsored cyberattacks that attempted to abuse its technology.

The two red lines — mass surveillance and fully autonomous weapons — were never part of existing contracts and had not, according to Anthropic, impeded military adoption to date. The company offered to collaborate on R&D to improve AI reliability for autonomous systems, but said the Pentagon did not accept the offer.

Counter-View: National Security Prerogative

Supporters of the administration's position argue that private companies should not dictate how the government employs lawfully procured technology, particularly in wartime applications. Pentagon spokesman Sean Parnell framed the dispute as straightforward: the military needs tools it can use without external veto. Hegseth accused Anthropic of "a textbook case of how not to do business with the United States Government," calling the company's stance "a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives."

There is also a practical argument: if the government accepts one company's ethical vetoes, it sets a precedent under which any vendor could attach ideological conditions to defence procurement, potentially hobbling military readiness.

Sceptics of Anthropic's position also note a commercial dimension: by drawing a public line on safety, the company burnishes its brand among enterprise and consumer customers who value responsible AI — a reputational asset that could be worth far more than the Pentagon contract itself.

Broader Implications

The ban is likely to reshape the AI-defence marketplace. It could serve as a warning to Google and OpenAI, both of which hold military contracts governed by their own safety frameworks, and it benefits Musk's xAI, potentially accelerating the government's reliance on companies willing to operate without usage restrictions.

The six-month window leaves open the possibility of a negotiated resolution, and the New York Times reported that behind-the-scenes talks were still underway when Trump's post blindsided Anthropic officials. But the supply chain risk designation — which could cascade through Anthropic's commercial partnerships — represents a qualitatively different level of coercion, one designed to impose costs far beyond the loss of a single government contract.

Anthropic did not immediately comment on Friday's developments.

---

Sources: Associated Press, CBS News, New York Times, Times of India, Financial Times, Anthropic official statement (anthropic.com)

---

⚠️ AI-Generated Content Notice

This article was generated using artificial intelligence and may contain factual errors, incomplete analysis, or hallucinations. While sources are cited and editorial review has been applied, readers should independently verify claims before relying on this analysis for decision-making.

---

Draft prepared for Numnet News — 27 February 2026
