US DoD to Anthropic: compromise AI ethics or be banished from supply chain

News · February 26, 2026 · Artifice Prime

A growing rift between the US Department of Defense (DoD) and Anthropic over how AI can be used by the military has led to Defense Secretary Pete Hegseth issuing a blunt ultimatum: work with us on our terms or risk being banned from Pentagon programs.

According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to the Pentagon’s terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.

The DoD’s view is that it should be free to use Anthropic’s AI for “all lawful purposes,” regardless of ethical boundaries set by the company itself. Anthropic, by contrast, wants to set narrower guardrails.

“The Department of War’s [DoD’s] relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” Chief Pentagon Spokesman Sean Parnell told Semafor last week.

The extraordinary stand-off appears to have been prompted by a series of conversations between Anthropic and DoD officials which have generated rising levels of friction. These include a report that Anthropic CEO Dario Amodei had insisted that the DoD respect limits the company had placed on how its AI could be used in certain military contexts.

Matters came to a head in early January when the US military used Anthropic’s Claude LLM in conjunction with technology from Palantir to help plan and execute the operation to capture former Venezuelan president, Nicolás Maduro.

Anthropic staff are believed to have raised questions internally about whether an operation in which dozens of people were killed was consistent with the guardrails set for Claude as part of its recently overhauled safety and ethics Constitution.

Despite this, the fine details of Anthropic’s limits aren’t always clear. Some restrictions appear in its September 2025 Acceptable Use Policy (AUP): for example, that its AI not be used for mass domestic surveillance, to compromise critical infrastructure, or to design or develop weapons.

Beyond that, Amodei himself has alluded to limits in statements and essays or through reported conversations with officials modelling hypothetical situations. This includes his recent call for more regulation of AI: “I think I’m deeply uncomfortable with these decisions [on AI] being made by a few companies, by a few people,” Amodei told the CBS News TV newsmagazine 60 Minutes in November. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”

Supply chain implications

If Hegseth were to make good on the threat to ban Anthropic, this would have major implications for the DoD and its long supply chain. In principle, companies that are part of the broader Defense Industrial Base (DIB) would have to stop using Anthropic’s AI platform in all its forms, including, presumably, the Claude Code Security cyber system launched only this week.

This seems highly unlikely. Banning a US company would be unprecedented; such an action has previously been reserved for a small number of foreign companies. Anthropic’s Claude is also currently one of only two frontier AI models to have achieved Impact Level 6 (IL6) certification for use on classified networks, joined only this week by xAI’s Grok.

Ripping Anthropic out altogether is unthinkable, not least because it is already tightly integrated with Palantir’s systems, which are themselves critical to the DoD. More likely, the DoD will simply compel Anthropic to concede ground by invoking the Defense Production Act, the downside being that this could sour cooperation in the longer run.

Alternatively, Anthropic may continue to insist on some limits, for example around using Claude to enable autonomous weapons, while the DoD simply acts as though they will be relaxed at some future point, allowing for an uneasy short-term truce.

The DoD versus Anthropic conflict echoes the confrontation between the FBI and Apple more than a decade ago over access to iPhones after the 2015 San Bernardino mass shooting. In that event, Apple refused to give ground, resulting in a long-running legal struggle. The current US administration seems less willing to be patient.

This article originally appeared on CIO.com.

Original Link:https://www.computerworld.com/article/4137506/us-dod-to-anthropic-compromise-ai-ethics-or-be-banished-from-supply-chain-2.html
Originally Posted: Wed, 25 Feb 2026 19:25:27 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They have an interest in artificial intelligence, its use as a tool to further humankind, and its impact on society.
