Anthropic vs Pentagon = Revenue wipeout

The Pentagon's labelling of Anthropic a “supply chain risk” could wipe out up to 100% of the Claude maker's revenue in 2026.

Leslie D'Monte
Published 13 Mar 2026, 10:59 AM IST
From left: Anthropic founder Dario Amodei, US President Donald Trump and OpenAI CEO Sam Altman.

On 5 March, Anthropic PBC said it would challenge the US Department of War in court after being labelled a “supply chain risk”.

The maker of Claude AI said no private company should be involved in operational military decision-making, stressing that its concerns relate only to fully autonomous weapons and mass domestic surveillance, not battlefield operations. Four days later, Anthropic filed two lawsuits against the DoW and other federal agencies, calling the Trump administration’s move “unprecedented and unlawful” and a violation of free speech.

The company told the court that hundreds of millions of dollars in revenue could be at risk. Its CFO said revenue from defence contractors and others reliant on the DoW could fall 50-100% if the designation stands.

Meanwhile, more than 30 employees from OpenAI and Google DeepMind have filed an amicus brief supporting Anthropic’s lawsuit. Among those named are Jeff Dean and researchers including Zhengdong Wang, Alexander Matt Turner and Noah Siegel, who argue that penalising a leading US AI firm could harm the country’s scientific and industrial competitiveness.

The dispute underscores the growing friction between national security priorities and the rapidly expanding AI industry. As governments increasingly treat advanced AI as strategic infrastructure, decisions on procurement, security vetting and supply-chain risk could shape which companies gain access to lucrative defence contracts, and which are sidelined.

For AI firms, the outcome could set an important precedent on how far governments can go in restricting commercial players from sensitive national security ecosystems.

In this week’s edition of Tech Talk:

  • Why GPT-5.4 signals the next leap in Autonomous AI
  • AI Tool Of The Week: Claude Legal plugin
  • Are you better off as a cook or plumber in this AI age?

GPT-5.4: The next leap in Autonomous AI?

GPT-5.4 is described by OpenAI Inc. as its most token-efficient reasoning model, delivering faster responses and lower costs. It comes in two versions: GPT-5.4 and the higher-end GPT-5.4 Pro.


The large language model can interpret screenshots, browse the web, and execute keyboard and mouse commands across apps, enabling multistep workflows that previously required human input.

GPT-5.4 Thinking can outline its reasoning plan upfront, letting users adjust direction mid-response. It also strengthens deep web research for highly specific queries and maintains context better during longer reasoning tasks, producing faster, more relevant answers.


Alongside improvements in reasoning, coding and professional knowledge work, GPT-5.4 enables more reliable agents, faster developer workflows and higher-quality outputs across ChatGPT, the API and OpenAI Codex.

In Codex and the API, GPT-5.4 is the first general-purpose model that allows agents to operate computers and run complex workflows across applications. It supports up to one million tokens of context, enabling planning and execution across long tasks.
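A one-million-token window matters for agents mainly because an entire long-running task, plan plus intermediate results, can stay in context. The sketch below shows the kind of budgeting check an agent harness might run before each call; the four-characters-per-token heuristic, the function names and the 10% headroom are illustrative assumptions, not anything OpenAI has published.

```python
# Illustrative context-budget check for a long-running agent task.
# CONTEXT_LIMIT reflects the stated 1M-token window; everything else
# here is an invented sketch, not OpenAI's actual machinery.

CONTEXT_LIMIT = 1_000_000  # tokens

def rough_tokens(text):
    """Crude estimate: roughly 4 characters per English token."""
    return max(1, len(text) // 4)

def fits_in_context(system_prompt, history, headroom=0.1):
    """Return True if the prompt plus history fits, leaving headroom."""
    used = rough_tokens(system_prompt) + sum(rough_tokens(h) for h in history)
    return used <= CONTEXT_LIMIT * (1 - headroom)

# ~50 long step transcripts (~3,000 estimated tokens each) fit easily.
history = ["step result " * 1000] * 50
print(fits_in_context("You are a coding agent.", history))  # True
```

In practice a harness would summarise or evict old steps once this check fails, rather than truncating blindly.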

GPT-5.4 also improves how models interact with large tool ecosystems through tool search, helping agents find and use the right tools more efficiently.
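Tool search is essentially retrieval over tool definitions: rather than stuffing every tool schema into the prompt, the agent first retrieves the handful relevant to the task. A toy sketch of the idea, with an invented registry and naive token-overlap ranking standing in for whatever OpenAI actually uses:

```python
# Toy "tool search": rank a large tool registry against the task text
# and pass only the top matches to the model. Registry names and the
# overlap heuristic are illustrative, not OpenAI's mechanism.

def tokenize(text):
    return set(text.lower().replace(",", " ").split())

def search_tools(task, registry, k=2):
    """Rank tools by token overlap between the task and each description."""
    task_tokens = tokenize(task)
    scored = [
        (len(task_tokens & tokenize(desc)), name)
        for name, desc in registry.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:k] if score > 0]

registry = {
    "browser.open": "open a web page url in the browser",
    "sheets.append": "append a row to a spreadsheet",
    "fs.read": "read a file from disk",
    "mail.send": "send an email message to a recipient",
}

print(search_tools("open the pricing web page and read it", registry))
# ['browser.open', 'fs.read']
```

A production system would use embeddings rather than token overlap, but the shape is the same: retrieve first, then call.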

According to OpenAI, GPT-5.4 is its most factual model yet. On the MMMU-Pro benchmark for visual reasoning, GPT-5.4 scores 81.2%, up from GPT-5.2’s 79.5%. On Toolathlon, which measures how well AI agents use real-world tools and APIs, GPT-5.4 achieves higher accuracy than GPT-5.2 in fewer turns.

Even so, OpenAI faces stiff competition from rivals such as Google’s Gemini and Anthropic’s Claude. The chart below tells the story.


AI TOOL OF THE WEEK

The AI hack we unlocked today is based on the Claude Legal plugin.

What problem does it solve? Legal teams face a quiet crisis: mountains of routine contracts, NDAs and compliance documents that demand careful review but consume enormous time and money. A single vendor contract can take two hours; 50 NDAs can mean days of paralegal work. Yet missing a single GDPR clause, overlooking an auto-renewal trap or ignoring an expired certification can prove costly.

This is not just a BigLaw problem. In-house teams at mid-market firms, compliance officers and HR departments face the same bottleneck daily. The work is repetitive, high-stakes and expensive to outsource. Specialised legal AI tools remain unaffordable for many. The Claude Legal plugin aims to change that equation.

How to access: claude.ai or the Claude Desktop app for agentic Cowork mode.

Claude Legal AI can help you:

Review contracts automatically, using /review-contract: Upload a vendor agreement and receive a GREEN/YELLOW/RED risk matrix with specific redline suggestions, in minutes.

Triage NDAs in bulk using /triage-nda: Process dozens of NDAs at once, auto-categorise them, and generate a summary report without touching each file individually.

Generate regulatory briefings using /brief: Get structured daily updates on topics like EU AI Act deadlines or GDPR changes, formatted for executive distribution.
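Under the hood, a bulk-triage command like /triage-nda presumably amounts to two steps: classify each document, then aggregate into one report. A minimal illustrative sketch; the categories, rules and file contents below are invented, since the plugin's real logic is not public.

```python
# Invented triage rules for illustration only; a real legal review
# would use the model's judgement against a user-defined playbook,
# not keyword matching.

RULES = [
    ("RED", ["perpetual term", "unlimited liability"]),
    ("YELLOW", ["auto-renewal", "unilateral amendment"]),
]

def triage(text):
    """Return the first matching risk label, defaulting to GREEN."""
    lowered = text.lower()
    for label, phrases in RULES:
        if any(p in lowered for p in phrases):
            return label
    return "GREEN"

def summary_report(docs):
    """Aggregate per-document labels into one summary count."""
    counts = {"GREEN": 0, "YELLOW": 0, "RED": 0}
    for name, text in docs.items():
        counts[triage(text)] += 1
    return counts

docs = {
    "vendor_a.txt": "Standard mutual NDA, two-year term.",
    "vendor_b.txt": "Contains an auto-renewal clause every 12 months.",
    "vendor_c.txt": "Unlimited liability for any disclosure.",
}
print(summary_report(docs))  # {'GREEN': 1, 'YELLOW': 1, 'RED': 1}
```

The point of the agentic version is that this loop runs over a whole folder unattended, with the summary as the only artefact a human needs to read first.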

What makes Claude Legal AI stand out?

• Your playbook, not a generic template: Every review is measured against the standards you define, making outputs consistent with your organisation’s actual risk tolerance, not a one-size-fits-all framework.

• Agentic batch processing: In Cowork mode, you hand off a folder of documents and return to finished, categorised work rather than a turn-by-turn conversation.

• Dramatically more accessible: It undercuts specialised legal AI tools significantly, bringing capabilities previously available only to large law firms within reach of in-house teams and mid-market organisations.

Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.

AI BITS & BYTES

Are you better off as a cook or plumber in this AI age? While earlier reports from the World Economic Forum and Microsoft Corp. suggest that blue-collar workers may weather the AI tsunami better than many white-collar roles, a new study by Anthropic reinforces the trend.


It finds that cooks, motorcycle mechanics, lifeguards, bartenders and dishwashers face the least risk from AI, while computer programmers, customer service agents and data-entry operators are among the most exposed.


The study combines data from the O*NET occupational database, Anthropic’s own economic index, and research by Tyna Eloundou on whether large language models can significantly accelerate workplace tasks.
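The mechanics of such an exposure index can be sketched simply: join each occupation's task list (O*NET-style) with a per-task estimate of how much an LLM could accelerate it, then average per occupation. All task lists and scores below are invented for illustration and are not Anthropic's data.

```python
# Toy exposure index: occupations whose tasks an LLM can accelerate
# score higher. Data here is entirely invented for illustration.

OCCUPATION_TASKS = {
    "cook": ["plate dishes", "grill meat", "inspect deliveries"],
    "data-entry operator": ["type records", "verify records", "format reports"],
}

TASK_LLM_SPEEDUP = {  # share of each task an LLM could meaningfully accelerate
    "plate dishes": 0.0,
    "grill meat": 0.0,
    "inspect deliveries": 0.1,
    "type records": 0.9,
    "verify records": 0.8,
    "format reports": 0.95,
}

def exposure(occupation):
    """Mean LLM-acceleration score across an occupation's tasks."""
    tasks = OCCUPATION_TASKS[occupation]
    return sum(TASK_LLM_SPEEDUP[t] for t in tasks) / len(tasks)

for occ in OCCUPATION_TASKS:
    print(f"{occ}: {exposure(occ):.2f}")
```

Even this toy version reproduces the study's qualitative finding: physical, in-person tasks pull an occupation's exposure down, while screen-bound text work pushes it up.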

You can find the full Anthropic report here.

AI creates 46 new billionaires, but at a cost: AI is continuing to create new billionaires: 114 of the world’s 4,020 billionaires are now linked to AI companies, and a little less than half of them (46) are new entrants to the list, suggesting a breakneck pace of wealth creation. However, this coincides with a parallel wealth destruction, according to the Hurun Global Rich List 2026.

AI models can reveal your real identity, warn researchers: A new study has warned that Generative AI has made it vastly easier for malicious hackers to uncover the real identities of anonymous social media users.

According to a report by The Guardian citing the study, large language models can now successfully match anonymous online profiles with actual identities by synthesising seemingly harmless information posted across different platforms.

The report notes that AI researchers Simon Lermen and Daniel Paleka found that the technology behind platforms like ChatGPT makes sophisticated privacy attacks highly cost-effective. In one experiment, the researchers fed anonymous accounts into an AI and instructed it to scrape all the information it could find.

OpenAI’s robotics head quits after Pentagon deal: Caitlin Kalinowski, head of OpenAI’s robotics team, has resigned. In a post on X (formerly Twitter), she said the decision was directly tied to the company’s new deal with the Department of War. “I resigned from OpenAI. I care deeply about the robotics team and the work we built together. This wasn’t an easy call,” she wrote.

Caitlin Kalinowski, former head of OpenAI’s robotics team.

Microsoft integrates Anthropic AI into Copilot: Microsoft is integrating Anthropic’s AI tools into its Copilot platform to meet rising interest in autonomous agents, shortly after the startup’s latest innovations triggered a decline in software equities.

‘Copilot Cowork’, a utility modelled after Anthropic’s popular Claude Cowork system, has intrigued Silicon Valley through its capacity to manage intricate operations like app development, spreadsheet construction and large-scale data management with minimal human intervention, reported Reuters.

Tech Talk is a weekly newsletter by Leslie D'Monte on everything happening in the world of technology and AI. Want this delivered straight to your inbox? Subscribe here.

About the Author

Leslie D'Monte, author of "AI Rising", is a tech and science writer with stints at top media houses. An MIT-Knight Fellow and TEDx speaker, he covers AI, deeptech, and digital policy, curates tech events, and hosts podcasts and Mint's Tech Talk newsletter.
