When Anthropic was courting backers for its latest $30 billion funding round in recent weeks, it approached an unlikely investor: 1789 Capital, a pro-Trump venture firm where one of the president’s sons is a partner.
After weighing a nine-figure investment, 1789 decided not to proceed, citing ideological reasons, according to people familiar with the matter. Among its concerns were Anthropic leaders' history of criticizing President Trump, the company's employment of several former Biden administration officials and its lobbying for AI regulation, the people said.
Anthropic had no problem ultimately raising the capital it wanted from investors including Peter Thiel’s Founders Fund, Singapore’s GIC and Coatue Management. But its spurned appeal shows that while the startup doesn’t have money problems, it does have a growing political problem.
The startup is the maker of Claude, the only large language model approved for use in classified settings, a Defense Department designation that has been a competitive advantage. Anthropic forged a partnership with data company Palantir in 2024 and won a military contract worth up to $200 million last summer.
But it has recently found itself in the Pentagon’s crosshairs—a conflict that could send shock waves through the U.S. defense complex.
The Pentagon wants to be able to use Anthropic and other AI tools for all lawful purposes. Anthropic, meanwhile, doesn’t want its technology used for operations including domestic surveillance and autonomous lethal activities.
Rivals OpenAI, Google and xAI have agreed in principle to have their models deployed in any lawful use cases, a Pentagon priority, Emil Michael, the undersecretary of war for research and engineering, said in an interview on the sidelines of a defense summit in West Palm Beach, Fla., on Tuesday. Anthropic hasn’t.
“We have to be able to use any model for all lawful use cases. If any one company doesn’t want to accommodate that, that’s a problem for us,” Michael said.
Anthropic and the Defense Department have been at odds for weeks over the contractual terms of how the startup’s technology can be used, The Wall Street Journal previously reported. Claude was used in the January operation to capture former Venezuelan President Nicolás Maduro, the Journal reported. An Anthropic employee asked a Palantir counterpart how Claude was used in the operation.
The standoff between Anthropic and the agency has now spilled into public view. Defense Secretary Pete Hegseth’s Pentagon, which has cast “woke” tech companies as a liability and treats any restrictions as an impediment to military effectiveness, is reviewing its partnership with Anthropic.
A senior defense official said on Tuesday that many at the Pentagon increasingly view Anthropic as a supply-chain risk and might ask contractors and vendors to certify that they don’t use the company’s Claude models. Because that designation is typically used for foreign adversaries, the move would mark an unusual rebuke of a U.S. company.
“Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” chief Pentagon spokesman Sean Parnell said.
An Anthropic spokesman said the company “is committed to using frontier AI in support of U.S. national security. That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.” Anthropic is having “productive conversations, in good faith,” with the Pentagon on how to continue that work, he said.
Classified work
Anthropic has sought to distinguish itself from its main rival, OpenAI—from which its founders defected in 2020—by focusing on serving the enterprise market. In that field, there is no enterprise bigger than the U.S. government, and within the federal government, the Defense Department is a prized pathway to lucrative contracts.
Anthropic signed an early partnership with Palantir and became the first LLM maker cleared to work on classified material. The company has previously touted its work in the national-security area, creating an advisory group last year featuring former senators and security officials and showcasing its government work during a full-day event at Union Station.
OpenAI, by contrast, is still working through the process of becoming certified to handle classified material.
Still, Anthropic's focus on AI safety, its policy views and its leaders' ties to Democrats have irked the Trump administration.
Anthropic CEO Dario Amodei compared Trump to a “feudal warlord” in a pre-election Facebook post supporting Kamala Harris in 2024. Speaking at the World Economic Forum in Davos in January, he criticized the Trump administration’s chip sales policies.
His sister, Daniela Amodei, Anthropic’s president, recently posted on LinkedIn in response to ICE’s killing of American citizens in Minneapolis, saying that “is not what America stands for.” She praised Trump and others for calling for an investigation.
Anthropic has hired several former Biden administration officials, including former Biden AI adviser Ben Buchanan and former National Security Council official for technology Tarun Chhabra, who helps oversee the company’s work with the Pentagon.
Sending a message
The company’s approach has drawn heat from David Sacks, Trump’s AI czar, and others in the administration who say they oppose so-called “woke” AI. Dario Amodei has said the company isn’t woke and that Anthropic doesn’t have political motivations.
Tensions between Anthropic and the Trump administration began bubbling up after the company last summer won a contract worth up to $200 million.
A Jan. 9 AI strategy memo from Hegseth emphasizes that the Pentagon needs to be able to “utilize models free from usage policy constraints that may limit lawful military applications.”
Hegseth is using the recent Anthropic dispute to send a message, a defense official said.
Tech experts say that labeling Anthropic a supply-chain risk would hinder the military’s AI capabilities and set a bad precedent for its work with other companies.
“It would be hard to think of a more strategically unwise move for the U.S. military to make in the AI competition,” said Dean Ball, a senior fellow at the center-right Foundation for American Innovation think tank, who left his role as an AI policy adviser in the Trump administration last year.
Meanwhile, tech industry observers say the eagerness to do business with the Pentagon is a reversal from attitudes less than a decade ago, when thousands of Google employees signed a petition protesting some of the company’s work for the Department of Defense.
At the Tuesday West Palm Beach event focused on defense tech, Omeed Malik, president of 1789 Capital, which opted not to invest in Anthropic, joked that the tech industry has come a long way in embracing defense. But not all are onboard, he said.
“I’m not gonna embarrass anyone, cough, Anthropic,” Malik said. The crowd burst into laughter.
Write to Keach Hagey at Keach.Hagey@wsj.com, Amrith Ramkumar at amrith.ramkumar@wsj.com, Deborah Acosta at deborah.acosta@wsj.com and Vera Bergengruen at vera.bergengruen@wsj.com
