
The Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines under the IndiaAI mission on Wednesday.
The document states that the AI governance framework should be ‘human-centric,’ meaning that AI systems must be designed and deployed in ways that “empower individuals and reflect their value systems.” It also notes that “humans should, as far as possible, have final control over AI system,” adding that human oversight is essential to maintaining accountability for AI systems.
In a release issued by the Press Information Bureau (PIB), S. Krishnan, Secretary, MeitY, said, “Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people’s lives while addressing potential harms.”
The report also places particular emphasis on user consent and data transparency. It reads, “The use of personal data without user consent to train AI models is governed by the Digital Personal Data Protection Act.”
The report states that AI systems must be ‘understandable by design,’ meaning that they must provide clear explanations and disclosures that help users and regulators understand how they work and what their outcomes mean for users.
“Regulators need to see and understand how AI systems are designed, which actors are involved, the relationship between different actors, and the flow of resources (data, compute) through the different stages of development and deployment,” the report reads.
The guidelines require AI companies to establish accessible and effective grievance redressal mechanisms as part of their accountability obligations. These mechanisms should ‘make it easy and reliable for individuals to report harms or concerns,’ be clearly visible, and be available in multiple languages and formats.
The framework also addresses risks including the malicious use of AI through deepfakes, algorithmic discrimination, lack of transparency, systemic risks, and threats to national security.
“These risks are either created or exacerbated by AI. An India-specific risk assessment framework, based on empirical evidence of harm, is critical,” the report says.
The report also lays particular emphasis on ensuring that the outcomes of AI systems are ‘fair, unbiased, and do not discriminate against anyone, including those from marginalised communities.’
“AI should be leveraged to promote inclusive development while mitigating risks of exclusion, bias, and discrimination,” it states.