When the Pentagon Breaks Its Own Deal: The Anthropic AI Showdown Explained

Last updated: March 2, 2026

In late February 2026, the relationship between AI company Anthropic and the U.S. Department of Defense collapsed in dramatic fashion, resulting in President Trump banning all federal agencies from using Anthropic's products and the Pentagon designating the company a "supply chain risk" — a label normally reserved for foreign adversaries. It was the first time the designation had ever been applied to an American company.

Here's how it happened and why it matters.

The Contract

In July 2025, the Department of Defense awarded Anthropic a two-year contract worth up to $200 million to prototype frontier AI capabilities for national security. Anthropic's Claude model, deployed through a partnership with Palantir, was the first and only commercial AI model approved for use on the Pentagon's classified networks.

The Dispute

Anthropic insisted on two "red lines" in its contract with the Pentagon: Claude could not be used for mass domestic surveillance of American citizens or to power fully autonomous weapons systems. The Pentagon's position was that AI models must be available for "all lawful purposes" without restrictions imposed by a private company. Defense Secretary Pete Hegseth called Anthropic's stance "fundamentally incompatible with American principles" and accused the company of trying to "seize veto power over the operational decisions of the United States military."

Tensions escalated after reports surfaced that Anthropic's technology had been used in the operation to capture Venezuelan President Nicolás Maduro in January 2026, prompting Anthropic to raise concerns with Palantir about how Claude was being used.

The Ultimatum

On February 24, Hegseth gave Anthropic CEO Dario Amodei a deadline: agree to unrestricted lawful use of Claude by 5:01 PM on Friday, February 27, or face consequences. On Thursday the 26th, Amodei released a public statement refusing to back down, writing that the company could not "in good conscience accede to their request."

The Fallout

On Friday, February 27, Trump posted on Truth Social calling Anthropic "Leftwing nut jobs" and directed every federal agency to stop using Anthropic's technology, with a six-month phaseout period. Hegseth then designated Anthropic a supply chain risk, meaning any contractor, supplier, or partner doing business with the U.S. military is barred from conducting any commercial activity with Anthropic.

This is potentially far more damaging than losing the $200 million contract itself, as it could force many of Anthropic's enterprise customers — particularly those with government contracts — to cut ties with the company. Anthropic announced it would challenge the designation in court.

OpenAI Steps In

The situation took another turn when OpenAI moved quickly to fill the vacuum. On Thursday, CEO Sam Altman sent a memo to staff saying OpenAI shared the same red lines as Anthropic and wanted to "help de-escalate things." By then, more than 60 OpenAI employees and 300 Google employees had signed an open letter supporting Anthropic's position. But just hours after Trump's ban on Friday evening, OpenAI announced it had reached its own deal with the Pentagon for classified network deployment.

Altman himself admitted the deal was "definitely rushed" and that "the optics don't look good." The backlash was significant — Anthropic's Claude overtook ChatGPT in the Apple App Store that weekend.

OpenAI claims its deal actually includes stronger safeguards than Anthropic's original contract: the same two red lines plus a third prohibiting "high-stakes automated decisions" like social credit systems. The key difference is that OpenAI structured these as technical safeguards (cloud-only deployment, OpenAI personnel in the loop, their own safety stack) rather than explicit contractual language restricting the Pentagon. Critics, including Techdirt's Mike Masnick, have argued the deal's reliance on existing law (like Executive Order 12333) still allows for domestic surveillance.

OpenAI also said it asked the Pentagon to offer the same terms to all AI labs and to resolve its dispute with Anthropic, and it publicly stated that Anthropic should not be labeled a supply chain risk.

The Military-Tech Revolving Door

Adding another layer to this story: in June 2025, the U.S. Army created "Detachment 201: Executive Innovation Corps" and directly commissioned four tech executives as lieutenant colonels in the Army Reserve — a rank that normally takes over a decade of military service to earn. The four were:

  • Kevin Weil — OpenAI's Chief Product Officer
  • Andrew Bosworth — Meta's CTO
  • Shyam Sankar — Palantir's CTO
  • Bob McGrew — Former OpenAI Chief Research Officer

Military.com reported that none of these executives would recuse themselves from their companies' defense business dealings. OpenAI announced its $200 million defense contract within days of the commissioning ceremony. When OpenAI then swooped in to replace Anthropic months later, its own CPO was already a commissioned Army officer.

The Safety Researcher Who Quit

On February 9 — weeks before the dispute went public — Mrinank Sharma, who led Anthropic's Safeguards Research Team, resigned. In his public farewell letter, he wrote that "the world is in peril" and that the safety team "constantly faces pressures to set aside what matters most." He described seeing "how hard it is to truly let our values govern our actions" within institutions shaped by competition, speed, and scale.

While his letter didn't name the Pentagon dispute specifically, the timing and his position at the center of Anthropic's safety work make it hard to view his departure as unrelated to the mounting government pressure.

Why It Matters

It is critical to understand that the Pentagon itself originally signed the contract containing these usage restrictions. The government agreed to Anthropic's terms in July 2025, and Anthropic's Claude operated on classified networks under those terms for months. The Pentagon is now retroactively demanding the removal of conditions it previously accepted — and punishing Anthropic for refusing to change an agreement both parties had already entered into.

This is not a case of Anthropic springing new restrictions on the military mid-contract; it is the government reversing its own position and retaliating when a private company held firm.

This sets up a potentially precedent-setting test of several major questions:

  • Whether the government can force a private company to abandon contractual terms it already agreed to
  • Whether private companies can impose ethical limits on how the government uses their technology without facing punishment
  • Whether "supply chain risk" designations can be weaponized as a coercive tool against domestic firms that don't comply with shifting government demands

For any company doing business with both the federal government and AI providers, the ripple effects of this dispute could reshape vendor risk calculations for years to come.


Update — March 2, 2026

Several significant developments have unfolded since the original post.

Claude Surges to #1 in the App Store

In a remarkable show of public support, Anthropic’s Claude app climbed to the number one spot on Apple’s App Store Top Free Apps chart, overtaking both ChatGPT and Google Gemini. The surge in downloads came as a direct response to the Trump ban and Anthropic’s refusal to back down. Even OpenAI CEO Sam Altman called Anthropic’s supply chain risk designation “a very bad decision” and “an extremely scary precedent,” saying he hopes it gets reversed. (Engadget)

Pentagon Used Claude in Iran Strikes — Hours After the Ban

In perhaps the most striking irony of the entire dispute, reports indicate that U.S. Central Command systems used Anthropic’s Claude to conduct intelligence assessments, identify targets, and simulate battlefield scenarios in real time during airstrikes on Iran — just hours after Trump signed the order banning its use. Experts have noted that separating the military from Claude would amount to “open-heart surgery,” as the model is deeply integrated into systems operated by Palantir across U.S. security operations. The transition away from Claude is expected to take at least six months, which explains why the administration continues to rely on the very tool it has publicly condemned. (Ynet News)

Legal Analysis: Hegseth May Be Overstepping His Authority

A detailed legal analysis published by Just Security (an independent law and policy journal at NYU School of Law) argues that Hegseth’s “supply chain risk” designation likely exceeds his statutory authority. The relevant law, 10 U.S.C. § 3252, defines a supply chain risk as the risk that an adversary may sabotage or subvert military systems — a definition that appears difficult to apply to a domestic company in a contract negotiation dispute. Furthermore, the statute only grants the Secretary authority to exclude a designated company from bidding on a narrow set of sensitive national security systems. It does not give the Secretary power to prohibit defense contractors from conducting “any commercial activity” with the designated company, as Hegseth declared. The analysis notes that § 3252 is a procurement authority, not a sanctions authority, and that Congress never granted the Pentagon the broad power Hegseth is claiming. The fact that the Pentagon plans to keep using Claude for up to six months after the designation further undermines the assertion that Anthropic poses any genuine supply chain risk. (Just Security)

#QuitGPT: OpenAI Boycott Movement Gains Momentum

A consumer boycott campaign called “QuitGPT” has emerged in response to OpenAI’s Pentagon deal, claiming that more than 1.5 million people have taken action by cancelling ChatGPT subscriptions, sharing boycott messaging on social media, or signing up through the campaign’s website. The movement accuses OpenAI of putting profit before public safety by rushing to fill Anthropic’s vacancy. The campaign is directing users toward alternative AI tools including Claude, Google Gemini, and various open-source options. An in-person protest at OpenAI headquarters in San Francisco is planned for March 3. (Euronews)
