
The United States Department of Defense has reportedly threatened to cut ties with artificial intelligence company Anthropic over the company's insistence on limits to the US military's use of its AI models, according to an Axios report citing a Trump administration official.
The report said the Pentagon is “fed up” after months of negotiations with Anthropic over the US government's push to let it use AI company tools for “all lawful purposes”. This would include use even in the “most sensitive areas of weapons development, intelligence collection, and battlefield operations”, as per the report.
Reuters said it could not independently verify the report.
According to a report by The Wall Street Journal, Anthropic’s contract with the Pentagon is worth around $200 million.
The two sticking points for Anthropic are fully autonomous weapons and mass surveillance of Americans, the report added. Notably, the Pentagon has contracts with Anthropic, Alphabet (Google), OpenAI and Elon Musk's xAI.
The source told Axios that the categories under dispute have “considerable grey area around what would and wouldn't fall into” them, and the Pentagon is not willing to negotiate case-by-case with Anthropic or have its AI model Claude block some processes unexpectedly.
On whether the department could cut the company off its roster, the official said that “everything's on the table… but there'll have to be an orderly replacement (for) them, if we think that's the right answer.”
In a statement to Axios, Anthropic said it remains “committed to using frontier AI in support of US national security”. The company's usage guidelines explicitly state that Claude is prohibited for use in facilitating violence, developing weapons, or conducting surveillance.
Last month, Reuters reported that Anthropic and the Pentagon clashed over safeguards that would prevent the government from deploying its AI model to target weapons autonomously and conduct domestic surveillance in the US.
Notably, the WSJ on 14 February reported that Anthropic’s Claude AI was used by the US during its operation to capture former Venezuelan President Nicolás Maduro. The application reportedly came through Anthropic’s partnership with Palantir, whose tools are widely used by the US Department of Defense and federal law-enforcement agencies.
An Anthropic spokesperson told WSJ, “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”
According to the Axios report, a quick swap would be difficult because rival models are not yet configured for use in specialized government applications. The official said "the other model companies are just behind" Claude, which the report noted was the first model brought into the Pentagon's classified networks.
Further, ChatGPT (OpenAI), Gemini (Google) and Grok (xAI) are all used in unclassified settings. These companies have agreed to forgo their usual safeguards for work with the Pentagon, and negotiations are under way to move their models into the classified space, the report added. Asked whether they had agreed to the “all lawful purposes” term, the official said one had, while the other two are “showing more flexibility than Anthropic”.
The statement from Anthropic's spokesperson to Axios reiterated its commitment to national security: “That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.”