OpenAI, the maker of ChatGPT, announced on Friday that it has signed a deal with the Pentagon. The deal allows the US department of war (DoW), formerly the department of defense, to use the Sam Altman-led company's AI models in classified environments.
That has raised eyebrows and questions over the use of the OpenAI technology stack in India, one of the largest markets for AI and the fastest adopter of the technology worldwide. An expert columnist in Mint recently pointed to how the deployment boundaries of AI systems could shift as they scale.
What, then, are the terms of the OpenAI-DoW deal? Will this impact Indian AI startups? What does this mean for AI governance worldwide? Mint explains.
What is the OpenAI-US DoW deal?
OpenAI, while allowing the use of its model in "classified environments", has set three key red lines as part of the contract with the DoW:
- Its technology cannot be used for mass surveillance programmes in the US.
- Its technology cannot be used to direct autonomous weapon systems.
- No high-stakes decisions can be automated, such as those underpinning social credit systems of the kind seen in China.
Red lines are specific prohibitions on AI use for behaviours or use cases that are deemed too dangerous to allow. Most foundation model companies have unofficially agreed to common red lines.
According to OpenAI's statement, the company is not providing the DoW with “guardrails off” models, meaning the company's safety standards and boundaries will continue to be enforced. OpenAI is also not providing the technology on the edge, implying that the ChatGPT maker's AI models will not be deployed by the DoW on mobile phones and individual devices. Theoretically, this means the DoW will not deploy OpenAI-based surveillance on a dissenter's phone.
Why did OpenAI sign the deal?
OpenAI's deal came hours after the DoW's deal with Anthropic collapsed.
According to Altman, OpenAI's co-founder and chief executive, the company signed the deal to “de-escalate” tensions between the DoW and American AI labs. “If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful,” said Altman on X (previously Twitter).
The de-escalation he referred to was the standoff between the US government and Anthropic, primarily because the DoW didn't provide clarity on how it would use the company’s foundational models.
Shortly after the deal fell through, US President Donald Trump and defence secretary Pete Hegseth terminated all existing deals with Anthropic and labelled the company as a supply-chain risk, a term usually used for foreign companies that are perceived as a security risk. In the past, the US tagged China-linked Huawei and cybersecurity provider Kaspersky, which has links to Russia, as security risks.
What does this mean for the Indian AI ecosystem?
Many of the world's top foundational AI labs have emerged from the US, including OpenAI, Anthropic, voice-first company ElevenLabs as well as xAI, Meta AI and Google's DeepMind. Companies outside the US include France-based Mistral, China's DeepSeek, and Indian sovereign models from Sarvam, BharatGen and Ola founder Bhavish Aggarwal's Krutrim.
In India, sovereign AI systems have long been the subject of debate: do we need to build our own, or should we focus on building applications that can be scaled easily and with less capital?
So far, India has lagged in developing a foundational model, as these businesses are capital-intensive and require high-end graphics processing units, like those sourced from US-based Nvidia.
For now, Indian labs like Sarvam and BharatGen are developing smaller, task-specific models, in contrast to large language models (LLMs) with hundreds of billions, or even trillions, of parameters.
Parameters are the internal values a model learns during training; their count is a rough measure of the model's size and capacity, not of the volume of data it was trained on.
For example, BharatGen is a 17-billion-parameter multilingual model trained on Indian data and would generally be classified as a small language model (SLM), rather than an LLM.
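To make the scale concrete, here is a small illustrative sketch (not drawn from any of the models named above) of how parameter counts are tallied for a toy fully connected network. Every layer contributes its weights plus its biases, and real models simply stack far more, far wider layers.

```python
# Illustrative only: parameters are the learned weights and biases of a
# model, not a database. This toy network shows how the count is computed.

def dense_params(n_in: int, n_out: int) -> int:
    """Parameter count of one fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# A tiny three-layer network: 512 -> 1024 -> 1024 -> 512 (hypothetical sizes)
layers = [(512, 1024), (1024, 1024), (1024, 512)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # roughly 2.1 million parameters
```

This toy network has about 2.1 million parameters; a 17-billion-parameter model like BharatGen's is roughly 8,000 times larger, which is why training such models demands so much high-end hardware.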
That said, the Indian models are growing fast. Sarvam launched its 105-billion-parameter model on 18 February; BharatGen told Mint in September 2025 that it is targeting a trillion-parameter model.
Are there any threats for Indian AI startups?
Deploying a product or technology in defence does not mean the armed forces absorb all of it. As with any other customer, the DoW does not get full access to every capability or layer of a vendor's stack. It would therefore be wrong to assume that, just because OpenAI may provide its AI models to the US defence department, all of its models will be compromised—startups in India or anywhere else need not be alarmed on that count.
No single AI platform is 'truly' sovereign. For instance, Google provides sovereign AI models in India that are stored on local servers, with their learning and processing databases air-gapped from other countries so that all data remains within Indian geographies. At the same time, Google's tools are also widely used by government bodies in the US.
That said, all such models remain connected to global cloud infrastructure for real-time information updates. Even before the DoW deal, companies had concerns over data privacy and usage guardrails—and those will continue. Regulators the world over, especially in Europe, have their work cut out on this count.
India, meanwhile, will continue its AI push, at a pace of innovation shaped by its capital availability and its distinct use cases.