New Delhi: Sanjeev Sanyal, a member of the Prime Minister's economic advisory council (PM-EAC), says India should have a specialist AI regulator with a broad mandate, along with a national registry of algorithms and a “repository of national algorithms for innovation of AI".
There was a need for such a regulatory framework amid the extreme approaches being taken by global economies, he said in a research paper published by the PM-EAC that suggests ways to regulate AI.
Sanyal said traditional methods fall short due to the non-linear and unpredictable nature of AI. Current regulatory approaches typically rely on ex-ante impact analysis and risk assessment and therefore face challenges in effectively governing AI.
The paper, titled 'A Complex Adaptive System Framework to Regulate Artificial Intelligence' and written by Sanjeev Sanyal, Pranav Sharma and Chirag Dudani, proposes a framework based on CAS (Complex Adaptive System) thinking, consisting of five key principles.
These include establishing guardrails and partitions to limit undesirable AI behaviour, and mandating manual overrides and authorization chokepoints so that critical infrastructure remains under human control at key stages for active intervention.
The principles also include open licensing of core algorithms and continuous monitoring of AI systems to ensure transparency, accountability and explainability, along with mandatory incident reporting protocols to document system aberrations or failures. These, the paper says, will define clear lines of AI accountability and ensure 'skin in the game' by holding individuals or developers responsible.
The key pillars have been suggested after considering approaches taken by other countries.
The US and UK, for instance, have taken a hands-off or self-regulatory approach, the paper notes, as opposed to the heavily state-regulated approach adopted by China.
India has offered to lead the development of a draft global artificial intelligence (AI) regulatory framework, which will be discussed and debated at the GPAI (Global Partnership on Artificial Intelligence) Summit, sometime in June or July.
The GPAI is a grouping of 29 nations including the European Union that in December last year adopted the New Delhi Declaration where countries agreed to use the GPAI platform to create a global framework on AI trust and safety, within six months.
The countries would also collaboratively develop AI applications in healthcare and agriculture, as well as include the needs of the Global South in the development of AI.
Against that backdrop, the research paper by the PM-EAC member suggests that open licensing of core algorithms for external audits, AI factsheets, and continuous monitoring of AI systems are crucial for accountability, apart from periodic mandatory audits for transparency and explainability.
"Implement clear boundary conditions to limit undesirable AI behaviour. This includes creating partition walls between distinct systems and within deep learning AI models to prevent systemic failures, similar to firebreaks in forests," the paper noted.
It added that manual overrides empower humans to intervene when AI systems behave erratically or create pathways to cross-pollinate partitions. Meanwhile, multi-factor authentication and authorization protocols provide robust checks before executing high-risk actions, requiring consensus from multiple credentialed humans.
One of the principles, establishing predefined liability protocols so that entities or individuals are held accountable for AI-related malfunctions or unintended outcomes, could put the onus on Big Tech, even though the paper does not explicitly say so.
The paper, however, highlighted that "this proactive stance inserts an ex-ante 'Skin in the Game,' ensuring that system developers and operators remain deeply invested and accountable for AI outcomes."
Sanyal also suggested the creation of a dedicated, agile, and expert regulatory body for AI with a broad mandate and the ability to respond swiftly, as traditional regulatory mechanisms often lag the rapid pace of AI evolution, thus ensuring that governance remains proactive and effective.
Experts said that regulations to supervise AI have to strike a balance between fostering innovation and ensuring responsible AI development.
Kazim Rizvi, founder of one of India's leading tech policy think tanks, The Dialogue, said the formulation of AI regulation in India will be a complex endeavour which will require careful consideration to ensure responsible and ethical development and deployment of AI technologies.
"The PMEAC's paper proposing a 'Complex Adaptive System Framework to regulate AI' offers valuable insights, some of which resonate with the principles outlined in The Dialogue's work on 'Trustworthy AI'. These principles, such as transparency, explainability, accountability, and fairness, are pivotal for fostering trust in AI systems and aligning regulatory efforts with global standards," he said.
"The adoption of globally accepted principles of trustworthy AI will not only enhance India's competitiveness in the global AI landscape but also facilitate collaboration and knowledge sharing with other countries. By aligning regulatory efforts with international standards, India can position itself as a leader in responsible AI development and contribute to the global conversation on AI governance," he added.
A spokesperson of the Ministry of Electronics and Information Technology (Meity) didn't respond to emailed queries.
Rajeev Chandrasekhar, minister of state for information technology and electronics, recently said that the first draft of the AI regulation was expected to be out by June-July.
It is unclear whether the principles suggested by the PM-EAC will be adopted to form the bedrock of AI regulation in the country. AI regulation will also be a part of the upcoming Digital India Act (DIA), which is expected to be put up for public consultation after the general elections conclude in early June.
As things stand, an inter-ministerial group has been tasked with the responsibility of drafting regulations for AI.
There have also been suggestions of creating a regulatory body consisting of different ministries as members to supervise and regulate AI.
India, which has been actively looking at creating AI capacities in the country, last month approved the ₹10,372 crore India AI Mission, which aims to build a base of graphics processing units (GPUs), multi-modal domain-specific large language models (LLMs), and a unified data platform.
It will also offer an open-source database of non-personal data that can be used to train AI models and market AI applications commercially.
On the regulatory side, it has been issuing advisories to platforms to ensure their AI products or tools do not “threaten the integrity of the electoral process" ahead of the national elections.
Last November, the Centre asked Big Tech firms and social media companies to take down deepfake content within 24 hours of a complaint.
The government referred to deepfakes as synthetic media created using AI tools and a major violation of the safety and trust of digital citizens.
The government has also insisted that social media platforms need to be more proactive considering the damage caused by deepfake content can be immediate, and even a slightly delayed response may not be effective.