Pentagon ‘fed up’ with Anthropic pushback over Claude AI model use by military, may cut ties, says report

The Pentagon is considering cutting ties with AI firm Anthropic due to disagreements over military use limitations of its models. Anthropic's $200 million contract could be impacted over its pushback on autonomous weapons and domestic surveillance.

Livemint
Updated 15 Feb 2026, 10:31 AM IST
The Pentagon has reportedly threatened to cut ties with Anthropic over its insistence on maintaining some limitations over how the US military uses its AI models, according to Axios.

The United States Department of Defense has reportedly threatened to cut ties with artificial intelligence company Anthropic over its insistence on some limits for use of its AI models by the US military, according to an Axios report, citing an official from the Trump administration.

It said the Pentagon is “fed up” after months of negotiations in which Anthropic resisted the US government's push for full military use of the company's tools for “all lawful purposes”. This would include use even in the “most sensitive areas of weapons development, intelligence collection, and battlefield operations”, as per the report.

Reuters said it could not independently verify the development.

According to a report by The Wall Street Journal, Anthropic’s contract with the Pentagon is worth around $200 million.


Why does the Pentagon want full control over the use and applications of AI models?

The two sticking points for Anthropic are fully autonomous weapons and mass surveillance of Americans, the report added. Notably, the Pentagon has contracts with Anthropic, Alphabet (Google), OpenAI and Elon Musk's xAI.

The source told Axios that the categories under dispute have “considerable grey area around what would and wouldn't fall into” them, and the Pentagon is not willing to negotiate case-by-case with Anthropic or have its AI model Claude block some processes unexpectedly.

On whether the department could cut the company off its roster, the official said that “everything's on the table… but there'll have to be an orderly replacement (for) them, if we think that's the right answer.”

In a statement to Axios, Anthropic said it remains “committed to using frontier AI in support of US national security”. The company's usage guidelines explicitly state that Claude is prohibited for use in facilitating violence, developing weapons, or conducting surveillance.


Pentagon vs Anthropic: Did use during the Maduro capture spark concerns?

Last month, Reuters reported that Anthropic and the Pentagon clashed over safeguards that would prevent the government from deploying its AI model to target weapons autonomously and conduct domestic surveillance in the US.

Notably, the WSJ on 14 February reported that Anthropic’s Claude AI was used by the US during its operation to capture former Venezuelan President Nicolás Maduro. The application reportedly came through Anthropic’s partnership with Palantir, whose tools are widely used by the US Department of Defense and federal law-enforcement agencies.

An Anthropic spokesperson told WSJ, “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”


Can US military replace Claude with other AI players?

According to the Axios report, a quick swap would be difficult, as rival models are not yet integrated into the same specialized government networks. The official said "the other model companies are just behind" Claude, with the report noting that it was the first model brought into the Pentagon's classified networks.

Further, ChatGPT (OpenAI), Gemini (Google) and Grok (xAI) are all used in unclassified settings. These companies have agreed to forgo their usual safeguards for work with the Pentagon, and negotiations are underway to move their models into the classified space, the report added. On whether they have agreed to the “all lawful purposes” term, the official said that one has, while two are “showing more flexibility than Anthropic”.

The statement from Anthropic's spokesperson to Axios reiterated the company's commitment to national security: “That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.”

Key Takeaways
  • The Pentagon is reportedly frustrated with Anthropic's restrictions on AI model usage for military applications.
  • Anthropic's insistence on some limitations over military use, particularly concerning autonomous weapons, is at odds with Pentagon demands.
  • Anthropic's $200 million contract could be impacted over its pushback on autonomous weapons and domestic surveillance.

