Anthropic seeks to renegotiate its AI deal with US DoD, says report
Anthropic is attempting to renegotiate the terms of its AI contract with the US Department of Defense (DoD). CEO Dario Amodei has been in meetings with Emil Michael, the US under-secretary of defense for research and engineering, to iron out contractual disagreements that led the DoD to mark Anthropic as a supply-chain risk, the Financial Times cited sources as saying.
The disagreements were specifically about clauses the DoD wanted in the contract that would allow it to use Anthropic’s systems to carry out mass domestic surveillance and build autonomous weapons systems, both of which are ethical red lines that Anthropic is not prepared to cross.
The latest renegotiation push appears to stem from Amodei’s recent discussions with investors and backers including Amazon, Nvidia, Lightspeed, and Iconiq, to find a path to de-escalate tensions with the DoD, Reuters reported citing sources familiar with the matter.
The sources also told Reuters that some of Anthropic’s investors were reaching out to their contacts in Washington to lobby on the model provider’s behalf.
Separately, the Information Technology Industry Council, an industry group representing major technology companies including Amazon, Nvidia, Apple, and OpenAI, has written to US Defense Secretary Pete Hegseth expressing concern that the department would go so far as to apply the supply-chain risk label to a US company over a procurement dispute.
The letter, Reuters reported, goes as far as to suggest that the move could restrict the government’s access to best-in-class technologies from American firms serving agencies across the federal government.
Instead, the Council suggested that the department either continue negotiating to resolve the issue or select another vendor, since the supply-chain risk label is usually reserved for companies that have been designated as foreign adversaries.
Contractual and technical compromises could save the deal
While tensions have escalated, analysts and legal experts say Amodei’s latest push to renegotiate the contract could offer a pathway for both sides to find common ground.
“A workable compromise is origin (of the data) and use-based, allowing bulk analysis where the inputs are foreign signals intelligence, while contractually barring use of Anthropic’s systems to process commercially acquired data of US citizens without an Article III warrant, backed by clear covenants and oversight mechanisms,” said Anandaday Misshra, managing partner of Amlegals, a law firm specializing in AI regulatory intelligence and data protection.
This could work, Misshra noted, as the core dispute is over acquisition of “bulk” data.
The DoD wants an “all lawful purposes” standard, including “analysis of bulk acquired data,” which in practice covers large volumes of commercially available information (CAI), such as US citizens’ location data purchased without warrants under the Third-Party Doctrine. Anthropic, Misshra added, reasonably views model-driven processing of such data as de facto domestic surveillance.
Echoing Misshra, Greyhound Research chief analyst Sanchit Vir Gogia said the contract language could include governance mechanisms that provide measurable oversight: immutable audit logs that capture prompts and outputs, and periodic compliance reviews that evaluate how models are being used in operational systems.
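To make the audit-log idea concrete, here is a minimal sketch of an append-only, hash-chained log of prompts and outputs of the kind Gogia describes. The class, fields, and roles are illustrative assumptions, not any actual Anthropic or DoD system.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only, hash-chained record of model prompts and outputs.

    Each entry embeds the hash of the previous entry, so after-the-fact
    tampering breaks the chain and is detectable in a compliance review.
    All names and fields here are illustrative assumptions.
    """

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # placeholder "genesis" hash

    def record(self, user_id: str, role: str, prompt: str, output: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "user_id": user_id,
            "role": role,
            "prompt": prompt,
            "output": output,
            "prev_hash": self._last_hash,
        }
        # Hash the entry contents plus the previous hash to extend the chain.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A periodic compliance review, in this sketch, would simply replay `verify()` over the stored entries and sample the logged prompts and outputs against the agreed use restrictions.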
Analysts also see a resolution in how Anthropic’s models are ultimately deployed within the DoD.
One possible compromise could involve deploying specialized versions of frontier models in tightly controlled environments for specific national security tasks, Gogia said.
Other controls could include enforcing policy at a gateway layer, where requests are screened through identity checks, role-based permissions, and predefined rules before reaching the model, allowing Anthropic to retain safeguards while the government maintains operational oversight, Gogia added.
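As a rough illustration of that gateway layer, the sketch below screens each request against role-based permissions and a list of contractually prohibited task categories before it would ever be forwarded to a model. The roles, task names, and rules are hypothetical assumptions, not drawn from any real contract or vendor API.

```python
from dataclasses import dataclass

# Hypothetical policy tables: which roles may run which task categories,
# and which categories are barred outright regardless of role.
ROLE_PERMISSIONS = {
    "intel_analyst": {"foreign_signals_analysis", "translation"},
    "logistics_officer": {"supply_planning", "translation"},
}
PROHIBITED_TASKS = {"domestic_surveillance", "autonomous_targeting"}


@dataclass
class Request:
    user_id: str
    role: str
    task: str
    prompt: str


def screen_request(req: Request) -> tuple[bool, str]:
    """Gateway check applied before a request is forwarded to any model."""
    if req.task in PROHIBITED_TASKS:
        return False, f"task '{req.task}' is contractually prohibited"
    if req.task not in ROLE_PERMISSIONS.get(req.role, set()):
        return False, f"role '{req.role}' is not cleared for task '{req.task}'"
    return True, "request allowed; forward to model with audit logging"


if __name__ == "__main__":
    allowed, reason = screen_request(
        Request(user_id="a-123", role="intel_analyst",
                task="domestic_surveillance", prompt="...")
    )
    print(allowed, reason)  # False: the task is on the prohibited list
```

The point of such a layer is that the policy lives outside the model itself, so the vendor’s red lines and the customer’s operational rules can both be expressed, logged, and reviewed at a single choke point.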
Similarly, Pareekh Jain, principal analyst at Pareekh Consulting, said that the technical deployment architecture could include third-party red teams to periodically check whether the implemented policies and safeguards remain effective as models evolve.
OpenAI is renegotiating as well
OpenAI, which moved quickly to secure a contract with the DoD after Anthropic was effectively barred last week, is also looking to revise the terms of its agreement.
CEO Sam Altman said in a post on X that the deal had been “rushed” and needed to be reworked following criticism online and reports of users uninstalling ChatGPT.
Earlier, OpenAI had published a blog post suggesting its arrangement with the DoD included contractual provisions preventing the use of its models for weapons systems or mass domestic surveillance in the US, positioning the agreement as more restrictive than the one under discussion with Anthropic.
Continuing its efforts to manage the optics around the entire imbroglio, OpenAI has sought to emphasize that its guardrails align with those of Anthropic.
Connie LaRossa, the OpenAI executive who oversees national policy, told delegates at a conference in California on Wednesday that her company shared the same ethical red lines as Anthropic and was working to support efforts to have Anthropic’s supply-chain risk designation removed, Reuters reported.
Advantage Anthropic?
However, if Anthropic and the DoD fail to reach a deal, the legal advantage may rest firmly with Anthropic, Misshra said.
“Anthropic has meaningful legal leverage. A $200 million engagement of this type is likely structured as an Other Transaction Authority (OTA) agreement, which is designed to preserve commercial terms, including Terms of Service and Acceptable Use Policies,” Misshra said.
“The government cannot simply import Federal Acquisition Regulation ‘Changes’ mechanisms to rewrite those terms without courting material breach,” Misshra noted, adding that this can be seen as statutory overreach.
“Under the Administrative Procedure Act (APA), the government must prove Anthropic actually poses a national security risk. Rejecting a contract clause on domestic surveillance doesn’t meet that bar,” Misshra explained.
Anthropic’s board, because the company is a Delaware Public Benefit Corporation, has a statutory duty to advance its stated AI safety public benefit, he said: “Authorizing effectively unrestricted military use, especially for US citizens’ surveillance, would be difficult to reconcile with that duty.”
Capitulation could set a risky precedent for AI vendors
If Anthropic does give in to the DoD’s demands, especially given Amodei’s public refusal until now to cross his company’s ethical red lines, it could set a risky precedent for the company and its peers, analysts and experts say.
“If Anthropic capitulates, it will set a precedent that commercial Acceptable Use Policies are effectively waivable under government pressure. DoD would be seen as free to accept ethical guardrails to access frontier capabilities, then later use tools like supply‑chain risk designations to strip those limits,” Misshra said.
“That dynamic incentivizes a race to the bottom, favoring contractors willing to abandon internal safety and human‑rights policies. For a company expressly structured around responsible AI, helping to establish that precedent is both strategically and legally risky,” Misshra added.
Capitulation to the US government could also damage Anthropic’s brand image and erode its credibility with independent users and developers who recently migrated from ChatGPT partly because of trust in Anthropic’s stated values, said Jain.
On the commercial side, Jain added, full concession to the US DoD without strong governance provisions could hurt Anthropic’s enterprise positioning, especially with European clients who are increasingly sensitive to military AI entanglement.
In fact, the European Policy Centre, an independent think tank, has already started raising concerns about the implications for European citizens as artificial intelligence becomes more deeply integrated into surveillance systems and military technologies.
In a blog post addressed to policymakers in the European Union, the think tank pointed to a recent resolution adopted by the United Nations General Assembly that calls on states to ensure human oversight and accountability in the development and deployment of military AI systems.
The resolution urges governments to put in place safeguards to ensure that AI-enabled systems used in defense or security contexts remain consistent with international law, including humanitarian and human rights obligations.
Jitse Goutbeek, an AI Fellow on the Europe’s Political Economy team at the EPC, wrote that such international commitments are particularly important as governments begin integrating frontier AI models into intelligence, surveillance, and defense planning.
Further, Goutbeek argued that procurement decisions and defense partnerships should increasingly take these commitments into account, suggesting that European governments may need clearer assurances from technology vendors and defense agencies about how human oversight and operational safeguards will be maintained when AI systems are deployed in sensitive national security environments.
Original Link: https://www.computerworld.com/article/4141287/anthropic-seeks-to-renegotiate-its-ai-deal-with-us-dod-says-report.html
Originally Posted: Thu, 05 Mar 2026 17:37:27 +0000