India's deepfake fight is a tricky battle

A quick fix is easier said than done.

Leslie D'Monte
Published 14 Feb 2026, 12:05 PM IST
India is pushing the regulatory envelope on deepfakes even as it does not have specific AI laws. (Mint)

Platforms from X.com to Instagram and Telegram now have less breathing room to take down non-consensual intimate imagery and deepfake content in India, but that is an exercise that is easier said than done.

In this week’s edition of Tech Talk:

  • India’s IT Rules against deepfakes, explained
  • Sarvam AI’s Bulbul V3 hits the right notes in a very Indian key
  • OpenAI Vs Anthropic: The agentic AI battle has just begun

India’s IT Rules against deepfakes, explained

The amended Information Technology (IT) Rules, 2026, require digital platforms to take down offensive content within two hours of receiving a complaint, as against a 24-hour window earlier. They get three hours in the case of a government or court order, compared with 36 hours earlier.
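To make the shortened windows concrete, here is a minimal, purely illustrative sketch (not any platform's real code, and the category names are our own) of how a compliance system might compute the removal deadline under the rules described above:

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadlines under the amended IT Rules as described above:
# 2 hours for a user complaint, 3 hours for a government or court order.
TAKEDOWN_WINDOWS = {
    "user_complaint": timedelta(hours=2),
    "government_or_court_order": timedelta(hours=3),
}

def takedown_deadline(received_at: datetime, source: str) -> datetime:
    """Return the latest time by which the content must be removed."""
    try:
        return received_at + TAKEDOWN_WINDOWS[source]
    except KeyError:
        raise ValueError(f"unknown complaint source: {source!r}")

# Example: a complaint logged at 10:00 IST must be actioned by 12:00 IST.
ist = timezone(timedelta(hours=5, minutes=30))
received = datetime(2026, 2, 14, 10, 0, tzinfo=ist)
print(takedown_deadline(received, "user_complaint"))
```

The point of the sketch is how little slack the arithmetic leaves: a complaint filed at 10:00 must be resolved by noon, which is why automated triage, not human review, becomes the default.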

The government has dropped the idea of mandating large, fixed-size watermarks on AI-generated content, but the rules, aimed at curbing the rapid spread of deepfakes and other culturally sensitive content, still mandate clear labelling of such content.

Intermediaries must ensure that any synthetically generated audio, image, or video that appears authentic is prominently labelled to distinguish it from real content.

In addition, platforms offering AI tools are required to deploy technical safeguards to prevent the creation or dissemination of specified harmful content, including child sexual abuse material (CSAM), content related to explosives, and deepfakes intended to defraud users or impersonate individuals.

The timeline for grievance officers to resolve general user complaints has also been halved to seven days from 15, in a bid to improve accountability.

Global precedents: Governments around the world are stepping in to regulate harmful online content. However, India’s approach stands out for the sheer speed it mandates.

  • The EU’s Digital Services Act requires platforms to act “expeditiously” on illegal content and immediately comply with court orders or notices from trusted flaggers, but it stops short of prescribing fixed takedown timelines.
  • Germany’s NetzDG law is more explicit, giving platforms 24 hours to remove “manifestly illegal” content, while more complex cases can take up to seven days.
  • The UK’s Online Safety Act focuses less on hard deadlines and more on proactive risk mitigation, particularly for deepfakes and child safety.
  • Australia allows its eSafety Commissioner to order the removal of intimate or abhorrent content within 24 hours, and sometimes faster in emergency cases.
  • The United States, by contrast, has no comparable takedown timelines at the federal level, relying instead on a mix of platform policies and state laws.

Easier said than done: As the comparison above shows, India’s two- and three-hour deadlines are among the most stringent in the world. While the intent to protect users from deepfakes, fraud, and non-consensual imagery is aligned with international norms, the execution raises questions.

Hence, there was no surprise when the draft amendments triggered pushback from big tech companies, with industry bodies such as the Internet and Mobile Association of India (IAMAI) arguing that the rules are overly rigid and technically challenging to implement across platforms, formats, and devices.

To be sure, large platforms do have automated systems to detect CSAM and certain forms of synthetic media, and tighter timelines do force faster responses, but the scale of content, linguistic diversity, and the need for contextual judgement make rapid, error-free enforcement difficult.

The risk is that platforms, faced with punitive timelines, may default to over-compliance. Moreover, limited time for human review or appeals increases the likelihood of false positives, with legitimate content being taken down.

Similarly, mandating prominent labelling of AI-generated content and requiring AI tools to prevent misuse sound reasonable in principle, but enforcing this consistently across formats, edits, and platforms remains technically unresolved worldwide.

Regardless, India is pushing the regulatory envelope even as it does not have specific AI laws. The success of these rules will depend less on the deadlines themselves and more on how enforcement, transparency, and grievance redressal are handled in practice. We will be watching this space closely.

You may read more about the rules here.

AI TOOL OF THE WEEK

by AI&Beyond, with Jaspreet Bindra and Anuj Magazine

The AI capability we unlocked today is based on a tool: PaperBanana.

What problem does it solve? Academic researchers spend countless hours manually creating methodology diagrams, system architectures, and statistical charts for their papers. Even as AI tools help write text faster, figure creation remains a bottleneck — choosing the right visual style, ensuring technical accuracy, and iterating until diagrams meet conference standards.

This “last-mile” problem delays paper submissions and slows down the entire research workflow, especially in AI and ML fields where complex methods demand clear visual explanations.

PaperBanana automates the creation of publication-ready academic illustrations, transforming paper text into conference-grade figures through multi-agent AI collaboration.

How to access: https://paper-banana.org/

PaperBanana can help you:

  • Generate methodology diagrams: Convert your method description into clear pipeline visualisations.
  • Create statistical charts: Automatically produce data plots that match academic standards.
  • Apply conference-specific styles: Get NeurIPS, ICCV, or other conference-appropriate aesthetics.

Example: Suppose you have written a methods section describing your new ML pipeline.

  • Paste Your Text
  • Select Your Style (Methodology Diagrams, Statistical Plots, Aesthetic Enhancements, Educational Infographics, Aesthetic Refinement)
  • Select Aspect Ratio, Resolution
  • Generate: The multi-agent system retrieves visual references, plans layout, and renders your diagram.
  • Refine: Review the figure and use “Refine” to correct specific elements like arrow directions or module labels.
  • Export: Download as PNG and insert directly into your LaTeX manuscript.

What makes PaperBanana special?

  • Multi-Agent Architecture: Five specialized agents collaborate to ensure faithfulness, readability, and aesthetics
  • Benchmark-Validated: Outperforms vanilla prompting and earlier tools on PaperBananaBench metrics
  • End-to-End Automation: Removes the figure-creation bottleneck from research workflows and autonomous AI-scientist pipelines

Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.


AI BITS & BYTES

Sarvam AI’s Bulbul V3 hits the right notes in a very Indian key

Sarvam AI has released Bulbul V3 as a text-to-speech model designed for production-ready voices for Indian languages.

Sarvam has been identified by the Centre to build India’s first homegrown language model, and voice is a very important piece of the AI puzzle.

Bulbul V3 currently supports more than 35 voices (sourced from professional voice artists) across 11 Indian languages, with support for 22 Indian languages planned. Indian speech is complex by default, with people switching languages mid-sentence, accents varying by region, and names, abbreviations, and emotions mattering as much as words.

By understanding context and intent rather than processing words as a simple sequence, Bulbul V3 generates speech that sounds natural and aligns with the emotional content of what is being said. It supports voice cloning, enabling brand-specific voices, consistent character identities, and personalised experiences.

In an independent third-party study conducted by Josh Talks, using a blind A/B human listening test across 11 languages, ElevenLabs v3 alpha led on audio quality in general (full-band) evaluations, while Bulbul V3 outperformed Cartesia Sonic-3, v2.5 flash, and other competitors. In the 8 kHz telephony-grade evaluation, though, Bulbul V3 was the top performer.

This implies that while Bulbul V3 does compete with the likes of ElevenLabs, Google, and OpenAI’s text-to-speech systems, its real achievement lies in where it chooses to compete. Bulbul V3 is built for Indian languages, accents, and code-mixed speech, the everyday Hinglish-like speech that most global models still handle awkwardly.


In telephony-grade audio, where clarity, pronunciation stability, and robustness matter more than emotional flourish, Bulbul reportedly performs extremely well. This makes it attractive for call centres, government services, agritech platforms, and healthcare helplines, which are all high-volume, real-world use cases in India.
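For readers unfamiliar with the term, "8 kHz telephony-grade" simply means audio sampled at phone-line rates, a fraction of studio quality. This deliberately minimal, stdlib-only sketch (our own illustration, unrelated to Sarvam's pipeline) shows the difference by synthesising a tone at 48 kHz and decimating it to 8 kHz; real systems would apply an anti-aliasing low-pass filter before decimating:

```python
import math
import struct
import wave

# Synthesise a 440 Hz tone at 48 kHz, then decimate to 8 kHz telephony rate.
SRC_RATE, DST_RATE, SECONDS, FREQ = 48_000, 8_000, 1, 440

samples = [
    math.sin(2 * math.pi * FREQ * n / SRC_RATE)
    for n in range(SRC_RATE * SECONDS)
]
step = SRC_RATE // DST_RATE        # 6: keep one sample in every six
telephony = samples[::step]

# Write the decimated signal as 16-bit mono PCM, as on a phone line.
with wave.open("tone_8k.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)              # 16-bit samples
    f.setframerate(DST_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in telephony))

print(len(samples), "->", len(telephony))  # 48000 -> 8000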

To be sure, ElevenLabs remains the gold standard for high-fidelity, expressive voice synthesis. Its voices sound richer, more emotive, and more “human” in long-form narration, audiobooks, podcasts, and creative media. It also offers broader global language coverage and finer emotional control.

In pure audio aesthetics, ElevenLabs still has the edge.

Bulbul, on its part, is optimised for 8 kHz telephony, low bandwidth, Indian phonetics, and code-switching, areas that global models treat as edge cases rather than core design problems.


Bulbul is also positioned as cheaper and more scalable for Indian enterprises, which matters when millions of voice interactions are involved. For startups and public-sector deployments, this alone is a decisive advantage.

Simply put, Bulbul is not a general-purpose speech model, nor does it challenge global leaders in emotional range, studio-grade output, or cross-cultural versatility. Yet, Bulbul V3 sings for India, and does so unusually well.

OpenAI Vs Anthropic: The agentic AI battle has just begun

Both OpenAI and Anthropic released new versions of their flagship AI models on the very day that Sarvam released Bulbul.

While GPT-5.3-Codex is being touted as OpenAI’s first model that helped create itself, Anthropic says its latest model represents a fundamental shift in how AI handles complex workplace tasks. The company highlighted use cases including financial modelling that synthesises complicated regulatory filings and market data, plus document and presentation outputs that require minimal refinement.

Anthropic’s launch of Claude Opus 4.6 followed the company’s earlier rollout of 11 agentic tools and plugins that sent Big Tech stocks tumbling, with investors fearing that AI could kill traditional software and services models.

Zoho co-founder Sridhar Vembu has even warned coders to look for alternative sources of livelihood as AI gets better and better at generating new apps and websites.

But LLMs still can’t reason well. In this context, you may also read why Stanford researchers say that LLMs may excel at benchmarks but still fail at reasoning. They posit that LLMs “have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios.”

About the Author

Leslie D'Monte, author of "AI Rising", is a tech and science writer with stints at top media houses. An MIT-Knight Fellow and TEDx speaker, he covers …
