
India must wake up on basic R&D for technology before it is too late

Summary
- We lag in foundational AI model development, and merely adapting others’ models is a poor substitute. India’s future in AI and other emerging technologies will depend on our willingness to invest in the unknown.
Recently, a Chinese artificial intelligence (AI) company unveiled a language model brimming with innovations that make it 15-20 times cheaper than some of the best models globally, while offering comparable capabilities.
Within days, another Chinese firm announced a model capable of handling inputs 20-32 times larger than any current global counterpart—equivalent to processing a 16,000-page PDF file.
These developments pose a sobering question for us: why aren’t such innovations emerging in India?
The AI revolution is unfolding before our eyes, and only a few quarters ago everyone stood at a similar starting line in model development.
On paper, we were well-placed to lead it.
The country boasts a large number of AI engineers and programmers, a thriving tech ecosystem and a burgeoning pool of risk capital.
In model development, we had no historical handicap (as in chip-making).
The formulas were known to model makers in the US as well as China, Korea, Europe and West Asia.
Yet, the Generative AI landscape in India remains largely derivative.
While some firms fine-tune open-source models for Indian languages or specific applications, foundational breakthroughs—akin to GPT-4, Claude or DeepSeek—are notably absent, and few seem worried.
This raises critical questions about our approach to all sorts of fundamental research, not just GenAI model development.
Our missing foundation: When generative models built on the transformer architecture took off a few years ago, India had the talent and resources to dive into foundational research.
Yet, the focus largely remained on applications and adaptations of existing models.
The prevailing view was that building models from scratch was too expensive.
Rather than taking on the cost challenge of making them affordable, we opted to skip the race.
This myopia is perplexing.
The core principles of transformer models are neither esoteric nor inaccessible; they can be illustrated through basic mathematical operations.
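The point about basic mathematics can be made concrete. Scaled dot-product attention, the operation at the heart of every transformer, reduces to a few matrix multiplications and a softmax. The sketch below is illustrative (the dimensions and random inputs are chosen for the example, not taken from any particular model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # pairwise query-key similarities
    weights = softmax(scores)         # each row sums to 1
    return weights @ V                # weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query tokens, embedding dimension 8
K = rng.standard_normal((4, 8))  # 4 key tokens
V = rng.standard_normal((4, 8))  # 4 value vectors
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query token
```

Nothing here is esoteric: it is matrix multiplication, scaling and normalization, all well within the reach of any strong mathematics graduate.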
Yet, instead of leveraging local talent to explore new architectures or more efficient methods, we chose to build on others’ foundations.
The number of research papers from our experts currently stands at barely a tenth of that from China or the US.
This has left India perpetually paying for intellectual property—whether in the form of cloud-usage fees, licensing costs or expenses on foreign hardware.
At some point, participating in the global AI race will become as difficult for us as entering leading-edge chip manufacturing is today.
The cost of incrementalism and our over-focus on the end goal: The global AI race is still young, with no clear winners.
China’s rapid advancements illustrate what’s possible. Its LLMs are not only efficient but also versatile. Impressively, these models seamlessly support multiple languages, including Indian ones.
In contrast, the most laudable Indian efforts largely focus on adapting existing foundational models to support local languages. This strategy addresses immediate needs, but falls short of fostering architectural innovation and of creating products that could enhance our global competitiveness.
Without basic breakthroughs, even India-specific language models risk being overshadowed by foreign alternatives that offer similar capabilities at scale, alongside a vast trove of other abilities.
Globally, innovation centres are investing heavily in basic AI research, massive data-sets and in-house training.
Our application-oriented mindset is rooted in a broader problem: a lack of willingness to fund research without clear end goals.
Foundational research, by its nature, is speculative.
It involves trial and error, with no guarantee of success. AI model development is harder still: the full benefit of a formula tweak or an architectural experiment may not be apparent until the model is fully trained, making every experiment a costly bet.
However, the payoff on success can be transformative. Countries investing in such research today will shape the industries of tomorrow.
Dependency is costly: History offers us a stark lesson: dependency on critical raw materials or technologies can push a nation further behind.
By failing to develop basic capabilities, we risk a paralyzing dependency on foreign tech in high-innovation fields where catching up with leaders may be impossible.
As global AI capabilities expand, using others’ innovations will only get costlier. Without ownership or control of foundational models, our ability to adapt them to our unique needs could also be held back.
GenAI, like software, has the potential to be a large net foreign-exchange earner for us, but at today’s pace, it risks becoming a huge forex absorber, like oil.
Indian dependency extends beyond GenAI.
Robotics, autonomous vehicles, AI-driven drug development and other nascent industries are also taking shape.
These fields also require basic research and their leadership opportunities are still open.
But if India stays stuck in its old ways, relying on incremental adaptations rather than bold exploratory research, we may fall behind in these industries too.
We must pivot quickly: Transformative breakthroughs often emerge from uncertain beginnings.
We need financial backing for efforts that have no clear end-goal, as long as they involve bold experimentation and foundational research.
India’s future in AI—and in many other emerging technologies—depends on visionary leaders willing to invest in the unknown.
The author is a Singapore-based innovation investor for LC GenInnov Fund.