Anthropic’s US gov’t lawsuit says federal action “unprecedented and unlawful”

News | March 10, 2026 | Artifice Prime

Anthropic on Monday fought back against the US federal government’s determination that it is a supply chain risk, suing the feds and arguing to a California federal judge that the government is being inconsistent and contradictory.

“The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety. The government does not have to agree with those views. Nor does it have to use Anthropic’s products,” the lawsuit filing said. “But the government may not employ the power of the State to punish or suppress Anthropic’s disfavored expression.”

The White House has used strong political terms to cast Anthropic as less than patriotic. A White House statement Monday referred to Anthropic as “a radical left, woke company,” and said, “our military will obey the United States Constitution [and] not any woke AI company’s terms of service.”

But Anthropic said that its resistance to two items in the government’s contract (autonomous lethal warfare and mass surveillance of Americans) was entirely technical, based on Anthropic testing showing that “Claude cannot safely or reliably perform those functions.”

“Anthropic has never tested Claude for those uses. Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare,” the lawsuit said. 

Unexplained inconsistency

The filing also argued that the government’s decision was “arbitrary, capricious, and an abuse of discretion” because “Anthropic had been one of the government’s most trusted partners until its views clashed with the Department’s.”

The filing added: “Until the Department [of Defense] raised this threat, no government official had ever raised a concern with Anthropic about potential supply chain vulnerabilities. On the contrary, the government has consistently provided the security clearances that are necessary for Anthropic’s personnel to perform classified work. Those clearances remain in place today. Moreover, in 2024 Anthropic became the first frontier AI lab to collaborate with the Department of Energy to evaluate an AI model in a Top Secret classified environment.”

The filing also said that the Department of Defense (DoD) “has recognized Claude’s capabilities as ‘exquisite.’ [DoD] suggested that Claude was so vital to our national defense that it needed to be commandeered under the Defense Production Act. And [Defense Secretary Pete Hegseth] has ordered that ‘Anthropic will continue to provide’ its services to the Department of War [a secondary name for the DoD] for up to six months. The ‘unexplained inconsistency’ between simultaneously designating Anthropic’s services a supply chain risk vulnerable to ‘sabotage’ or other ‘subversion’ by a foreign adversary while directing those services to be used for up to six months for national security purposes demonstrates the arbitrariness of the Secretary’s final decision.”

Analysts offered mixed opinions about the implications for enterprise IT leaders, although most said the dispute would force politics into what should be a technological decision.

Uncharted territory

“For Gartner clients, this falls under geopolitical tension, which factors into an organization’s purchasing priorities. In this case, it will likely hurt Anthropic with their government contracts even if the supply chain risk designation is quashed by the courts,” said Nader Henein, a Gartner VP analyst.

“On the other hand,” he observed, “it may help them with non-US buyers who will view their stance as a reassuring sign. When it comes to the wider industry, European clients are paying close attention to the signatories of the EU AI Act code of conduct, which is still missing some notable names such as DeepSeek, xAI, and Meta.”

Cole Cioran, managing partner of the Canadian Public Sector at Info-Tech Research Group, added that the implications will likely extend far beyond the courts.

“Anthropic’s challenge to the Pentagon’s supply-chain risk label is more than a legal dispute. It’s a shot that will echo around the world for as long as this is before the courts,” Cioran said. “The debate over how democratic nations will govern AI in the context of sovereignty, security, and ethics has needed a challenge like this to drive clearer standards.”

He pointed out that for countries like Canada, where digital sovereignty and responsible AI “sit at the center of national strategy,” this case becomes “a litmus test for principled leadership.” Anthropic CEO Dario Amodei’s decision to stand firm shows that Anthropic is prepared to defend its principles publicly, despite “an unprecedented national security designation” that could materially restrict its access to US defense markets.

Cioran suggested that this will eventually be a good thing for Anthropic.

“As the proceedings drag on, as they inevitably will, time becomes an asset for Anthropic rather than a liability. In geopolitics, the clock beats the gavel, just as the US vs Microsoft case transformed the company from an aggressive monopoly into a trusted partner. My prediction is that the longer this case runs, the more it will define what credibility looks like for AI vendors on the global stage,” Cioran said.

“This resilience will resonate with governments that require vendors to demonstrate their adherence to core values such as inclusive development practices, environmental protection, and ethical AI governance. However, before Amodei’s stand, vendors largely relied on asserting their own ethical standing,” he said. “Now that Anthropic has taken a stand, evaluators will know what evidence looks like.”

However, Acceligence CIO Yuri Goryunov said one interpretation of the government’s position is that its resistance to Anthropic is because it doesn’t want to risk an AI system interfering with or second-guessing military personnel. But, he noted, if that was truly the concern, it would likely mean a ban of all vendors selling agentic or generative AI systems, because that risk exists for all.

“We are entering uncharted territory here, and this situation requires careful legal and technical assessment. Ultimately, this is about control—who possesses it, and how they exercise it. If a technology is designated a supply chain risk to national security because it is not aligned with US military objectives, several risks emerge,” Goryunov said. “The system might arbitrarily decide to disclose sensitive payment information to the public or an adversary if it determines that such an action would lead to a morally better outcome.”

Nonetheless, the Trump administration’s anti-regulatory advocates need to be consistent, said cybersecurity consultant Brian Levine, executive director of FormerGov, and a former federal prosecutor.

“We can’t have it both ways. If we don’t want heavy‑handed government regulation, then we need to support responsible self‑regulation. Otherwise, we’re sleepwalking into a technological dystopia of our own making,” Levine said. “For organizations, embedding safety constraints isn’t just the ethical choice—it’s the smart economic one. CIOs and CISOs should prioritize vendors that are willing to self‑regulate and they should also maintain backup providers in case sudden or arbitrary government actions disrupt access to their preferred AI platforms.”

And, Levine said, from a purely legal perspective, the government’s position doesn’t make much sense. The fact that Anthropic couldn’t agree to all of the contractual terms “in no way makes them a supply chain or national security risk.”

Original Link: https://www.computerworld.com/article/4142786/anthropics-us-govt-lawsuit-says-federal-action-unprecedented-and-unlawful.html
Originally Posted: Tue, 10 Mar 2026 03:57:37 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They are interested in artificial intelligence, its use as a tool to further humankind, and its impact on society.
