AI & War: Principle, Profit and Agentic AI combat

Anthropic out, OpenAI in. Iran war rages on. The Human, in all of this, is not amused.

Leslie D'Monte
Updated 6 Mar 2026, 10:16 AM IST
OpenAI CEO Sam Altman (left) and Anthropic’s Dario Amodei at the India AI Summit in New Delhi in February 2026. (AFP)

Last week, we wrote about Anthropic’s emergence as the poster child of AI. That was short-lived. The US Department of War (formerly the Pentagon) chose the more pliable OpenAI to do business with, even as it threatened to label Claude AI a “supply-chain risk” because of Anthropic’s reluctance to allow use of its models without a few extra guardrails.

Anthropic said it cannot “in good conscience accede to their (Department of War’s) request”, but as we pointed out last week, the confrontation effectively jeopardises Anthropic’s long-standing defence contracts with the government, and may make its investors jittery till things become clearer.

In this week’s edition of Tech Talk:

  • Citrini vs Citadel and AI’s days of future-past
  • AI Tool of the Week: Lyria 3 in Google Gemini
  • Did Grok predict the Iran-Israel-US war?

A Matter of Principle vs Profit

An Axios article highlighted that Anthropic has raised more than $60 billion from over 200 venture capital investors, half of it in just the last two months. These new investors will understandably be nervous: US defence contractors, including the likes of Lockheed Martin, are expected to remove Anthropic’s AI tools from their supply chains within days of President Donald Trump ordering all federal agencies to immediately stop using them.

Meanwhile, Sam Altman has been sending mixed signals. On the one hand, he admitted that OpenAI’s rushed deal with the DoW, struck amid its battle with Anthropic over safety and ethical concerns, looked “opportunistic and sloppy”. He said his company wants to work “through democratic processes” and urged the DoW to reconsider its decision about Anthropic.

On the other hand, Altman’s latest post says that the ChatGPT maker has made some “additions in our agreement to make our principles very clear…”.

“[We]…shouldn’t have rushed to get this out (referring to the deal with DoW) on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy... In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we’ve agreed to.”

Predictably, Amodei has accused Altman of “gaslighting”, saying that Altman is falsely “presenting himself as a peacemaker and dealmaker”. Netizens, too, are not amused. They applauded Anthropic’s “principled” stand, pushing Claude to No.1 on the iOS free app rankings in late January, even as many users uninstalled ChatGPT as a mark of protest. Users on social media platforms like Reddit and X claim to be switching to Claude because of OpenAI’s “opportunistic” stance.

While these moves are unlikely to change the DoW’s perception of Anthropic, let alone trigger a change of heart, they send a strong signal to Big Tech and governments that principles and guardrails remain a valuable and treasured commodity.

Beginning of Agentic AI wars

The continuously changing geopolitical scenario, with abrupt wars, is queering the pitch. In the context of AI, it has brought to the fore the question of governments unleashing semi- and fully-autonomous weapons on their enemies.

And it’s with good reason. Unlike traditional AI or even generative AI, agentic systems act as autonomous officers. Given an objective, they pursue it, adapt when things go wrong, and coordinate with other agents. Agentic AI can thus close the OODA (Observe-Orient-Decide-Act) loop on its own, planning missions, fusing multi-sensor data, adapting mid-course, and executing tasks with minimal human input.
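A toy loop can make the OODA idea concrete. The sketch below is purely illustrative (the function name, numbers and "sensor" are invented for this example, and have nothing to do with any real system): each cycle the agent observes its state, orients by computing the error against its objective, decides on a bounded action, and acts, repeating until the goal is met without any human in the loop.

```python
# Toy sketch of a closed OODA (Observe-Orient-Decide-Act) loop.
# Hypothetical illustration only; all names and values are invented.

def run_ooda(target, position, max_steps=20):
    """Drive `position` toward `target`, re-planning every cycle."""
    for step in range(max_steps):
        observed = position                      # Observe: read the "sensor"
        error = target - observed                # Orient: interpret the data
        if abs(error) < 0.5:                     # Decide: objective reached?
            return step, position
        move = max(-1.0, min(1.0, error * 0.5))  # Decide: pick a bounded action
        position += move                         # Act: execute, then loop again
    return max_steps, position

steps, final = run_ooda(target=10.0, position=0.0)
```

The point of the sketch is the absence of a human checkpoint between "Decide" and "Act": the loop closes itself, which is exactly the property that distinguishes agentic systems in the conflicts discussed below.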

Three conflicts illustrate the spectrum:

  • Israel-Iran saw AI as a Command-and-Control (C2) integration layer synchronising multi-domain strikes, with humans retaining lethal authorisation.
  • Ukraine’s “Operation Spider’s Web” deployed autonomous targeting that activated when human control was jammed, with the Gogol-M mothership-subagent architecture approaching true autonomy.
  • The Mexico cartel operation was AI-assisted ISR feeding a human raid. But, alarmingly, Jalisco New Generation Cartel operatives embedded in Ukrainian drone units are actively reverse-engineering battlefield AI for criminal use.

All these developments signal a decisive shift in how wars may be fought even as regulation hasn’t kept pace because of ambiguity in defining these terms, dual-use concerns, strategic competition, and verification issues.

Moreover, as conflicts in Ukraine and West Asia intensify, questions are mounting about whether multi-agent AI architectures, systems where multiple autonomous agents coordinate and adapt in real time, are already shaping the battlefield.

In the Ukraine war, for instance, both sides have deployed AI-enabled drones, including loitering munitions such as the Shahed 136. These systems can navigate autonomously and strike pre-programmed targets without live piloting. But they rely on narrow AI (computer vision, GPS and fixed targeting logic), not decentralised multi-agent coordination.

Similarly, Turkey’s Kargu-2 drew global attention after a UN panel suggested it may have operated in an autonomous engagement mode in Libya. Even here, autonomy referred to onboard target recognition and strike capability, not to multiple AI agents dynamically coordinating strategy.

The most advanced demonstrations of multi-agent architectures remain experimental. DARPA’s Offensive Swarm-Enabled Tactics (OFFSET) programme tested large drone swarms capable of decentralised coordination and real-time adaptation. Likewise, the Pentagon’s Perdix microdrones showcased distributed “swarm intelligence” without a central controller. But these were controlled demonstrations, not confirmed battlefield deployments.

In recent strikes involving Iran, reporting suggests AI models were used for analysis and planning, not autonomous swarm combat. Multi-agent AI implies decentralised decision-making, role allocation, and emergent behaviour under battlefield stress. Public evidence shows militaries are advancing toward that capability but have not yet operationalised fully self-directed, multi-agent lethal systems.

For now, autonomy in war is real. True multi-agent AI warfare remains largely in the lab. In the absence of sufficient guardrails, we must be grateful that it’s so. At least, for now.

AI TOOL OF THE WEEK

by AI&Beyond, with Jaspreet Bindra and Anuj Magazine

The AI hack we unlocked today is based on a tool: Lyria 3 in Google Gemini.

What problem does it solve? Every week, teams across marketing, L&D, and communications hit the same invisible wall: they have great content (maybe a training video, a product launch reel, an internal town hall recording) but no budget, no time, and no expertise to give it a soundtrack that actually fits. So they either pay for stock music that sounds generic, spend weeks coordinating with a composer, or simply ship the content with no audio.

None of these options are good enough. A mid-sized company produces dozens of internal videos, client-facing presentations, and social media assets every month. Each one is an opportunity to communicate with emotion, and most squander it because original music feels out of reach.

Google Gemini Lyria 3 solves these problems. Type a sentence describing your brand, your message, or your mood. In under 30 seconds, you have an original, 30-second, 48 kHz stereo track with vocals, auto-generated lyrics, and custom cover art. No composer. No licensing. No prompt engineering degree required.

How to access: gemini.google.com

Lyria 3 can help you:

  • Generate original soundtracks for training videos and internal communications, with no licensing fees.
  • Create custom jingles for product launches, events, and social content in seconds.
  • Photo to music: Turn a photo or campaign image directly into a mood-matched audio track.

Example: Here’s how to use Lyria 3 across three real moments:

1. The training video that finally sounds intentional: The team opens Gemini, selects music, and types: “Professional, focused background music for a compliance training video. Neutral, modern, slightly upbeat. Instrumental only, no vocals.” In 30 seconds they have a unique track.

Then follow up: “Make it slightly warmer.” One more generation. Done.

2. The town hall that needed an entrance: The CEO’s quarterly all-hands has a slide deck, a strong agenda, and zero audio identity. Prompt: “An energising, cinematic 30-second opener for a company town hall. Modern, confident, builds to a crescendo. No lyrics.” The track plays as employees join the call.

3. The product launch reel that came together last minute: Marketing has 48 hours before a product goes live. The campaign video is ready, but the licensed track they wanted costs $800 and takes three days to clear. Instead, the marketing manager uploads the hero campaign image to Gemini and prompts: “Create a track based on this image. Bold, aspirational, millennial energy, strong female vocals.”

Lyria 3 reads the visual (bright colours, forward motion) and delivers a track that feels made for the video. Because it was.

What makes Lyria 3 special?

It lives inside your existing workflow, not outside it: Unlike Suno, Udio, or any other standalone music tool, Lyria 3 is built directly into Gemini, the same window where you’re already writing, researching, and creating.

No music knowledge required: Genre, tempo, vocal style can all be directed in plain English, the way you’d brief a colleague.

Built-in AI transparency: Every track is automatically watermarked with SynthID, Google’s invisible audio fingerprint, so AI-generated content is always identifiable, even after compression or re-recording.

Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.

AI BITS & BYTES

Why slower adoption lowers displacement risk

Countering last week’s Citrini Research post, which rattled IT stocks in particular, a Citadel Securities note argues that technological diffusion has historically followed an S-curve. Early adoption is slow and expensive. Growth accelerates as costs fall and complementary infrastructure develops. Eventually, saturation sets in, and the marginal adopter is less productive or less profitable, which causes growth to decelerate.

Despite this, markets often extrapolate the acceleration phase linearly. History suggests the pace of adoption plateaus: organisational integration is costly, regulation emerges, and marginal returns in economic deployment diminish. The risk of displacement declines with a slower pace of adoption.
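The S-curve argument can be sketched with a standard logistic function. The parameter values below are illustrative only (they are not from the Citadel note): year-over-year growth accelerates early, peaks near the curve’s midpoint, and then decays toward zero as the market saturates, which is the shape markets mistake for linear acceleration.

```python
import math

# Logistic S-curve sketch of technology diffusion: slow start,
# acceleration as costs fall, then a plateau. Illustrative numbers only.

def adoption(t, ceiling=1.0, midpoint=10.0, rate=0.6):
    """Share of the market that has adopted by time t."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Year-over-year growth: rises, peaks near the midpoint, then decays.
growth = [adoption(t + 1) - adoption(t) for t in range(20)]
```

Extrapolating from the steep middle of this curve overstates future adoption precisely because the fastest-growing year sits at the midpoint, not at the start of a permanent trend.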

[Chart: Job postings for software engineers are rising.]
[Chart: AI adoption trends do not look linear.]

Did Grok predict the US attack on Iran? Elon Musk’s chatbot was alone in predicting the date of the US strike on Iran as 28 February. The prediction was made in a methodological test on 25 February, three days before the attack.

Iran war puts spotlight on Big Tech’s AI bets in West Asia: Countries in West Asia are investing heavily in AI, semiconductor partnerships and cloud computing as part of broader transformation plans designed to attract foreign capital and build domestic technology ecosystems. Here are some of the Big Tech AI plans in the region.

Elon Musk’s X cracks down on AI deepfakes on the Iran war: X product head Nikita Bier said the company is trying to maintain access to authentic on-the-ground information in times of war. X now warns creators that they will be suspended from revenue-sharing for 90 days if they post AI-generated videos of an armed conflict without adding a disclosure.

India adopting pragmatic approach to regulate AI: IAIRO

India has adopted a balanced and pragmatic approach to AI, avoiding the extremes of the lightly regulated US model and the heavily compliance-driven framework of the EU, said an Indian AI Research Organisation official. IAIRO founding director Amit Sheth said India’s balanced regulatory approach enables innovation while safeguarding users.

Hope you folks have a great weekend. Your feedback will be much appreciated: just reply to this mail, and I’ll respond. Want this newsletter delivered straight to your inbox? Subscribe here.

About the Author

Leslie D'Monte, author of "AI Rising", is a tech and science writer with stints at top media houses. An MIT-Knight Fellow and TEDx speaker, he covers AI, deeptech, and digital policy, curates tech events, and hosts podcasts and Mint's Tech Talk newsletter.
