Mint Primer: Grave new world: Can we oversee AI decisions?

The ministry of corporate affairs is automating its compliance system with AI and ML tools, but decisions such as serving notices on companies will be left to human officials to make (Mint)

Summary

MCA’s new AI-powered compliance system will be rolled out on its MCA21 portal once an ongoing upgrade and migration of forms to high-security ones is completed in a couple of months

The ministry of corporate affairs (MCA) is automating its compliance system with artificial intelligence (AI) and machine learning (ML) tools. But decisions such as serving notices on companies will be left to (human) officials to make. Mint explains the hybrid approach:

What is MCA mandating and why?

MCA’s new AI-powered compliance system will be rolled out on its MCA21 portal once an ongoing upgrade and migration of forms to high-security ones is completed in a couple of months, Mint reported, citing an unnamed official. However, while the system will draw up a list of errant companies, only an authorized official will decide whether to act against them. The idea is to adopt a “human-centric” approach to AI and give non-compliant companies time to respond before a notice is served. The approach is not unlike regulators calling for public discourse on draft laws before finalizing them.

What’s new about this approach?

MCA21 was designed to automate all services related to enforcement of, and compliance with, requirements under the Companies Act. In a “Vision 2019-2024” document, MCA underscored the use of AI, ML and “real time analytics” to develop a common platform connecting all economic and financial regulators’ databases and avoiding duplication of data. In March 2020, the Lok Sabha was informed that Version 3 of the MCA21 portal would use AI and ML to enhance “security and threat management” solutions, among other things. This time around, MCA plans to have humans supervise the AI-generated results.

What does ‘human in the AI loop’ mean?

Keeping someone in the loop typically means making that person part of, or at least privy to, the decision-making process. Even in highly automated factories, known as “lights-out” plants, humans remain present to halt processes with a “kill switch” in an emergency. Policymakers are now adopting this concept to govern AI.
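In software terms, the pattern can be sketched as a gate between an automated flagging step and the final action. The snippet below is a minimal illustration, not MCA's actual system: the company records, field names and approval rule are all hypothetical, standing in for whatever the real portal uses.

```python
# Hypothetical human-in-the-loop gate: an automated model flags candidates,
# but only items an authorized official approves proceed to enforcement.

def flag_non_compliant(companies):
    """Automated step (stand-in for the AI model): flag companies
    with any overdue filings."""
    return [c for c in companies if c["filings_overdue"] > 0]

def human_review(flagged, approve):
    """Human step: only flagged items the official approves go forward."""
    return [c for c in flagged if approve(c)]

# Illustrative data; the fields are assumptions, not real MCA21 records.
companies = [
    {"name": "Acme Ltd", "filings_overdue": 2},
    {"name": "Beta Pvt", "filings_overdue": 0},
    {"name": "Gamma LLP", "filings_overdue": 1},
]

flagged = flag_non_compliant(companies)
# The official declines the borderline case (only one overdue filing),
# so fewer notices are served than the model alone would have issued.
to_notice = human_review(flagged, approve=lambda c: c["filings_overdue"] >= 2)
print([c["name"] for c in to_notice])  # ['Acme Ltd']
```

The point of the gate is that the automated step can only propose; the human step decides, which is where accountability for a served notice would rest.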

What are the benefits of this approach?

Generative AI models are known to convincingly provide wrong answers, plagiarize, and violate copyrights and trademarks, all without a moral compass. Even experts cannot fully explain how large language models (LLMs) such as OpenAI’s GPT-4 arrive at their conclusions. And who is to blame if such a system gives wrong legal or medical advice? Hence, firms now hire humans to moderate content, and data annotators to add labels, categories and other contextual elements that improve the accuracy of the models.

Can humans match the might of AI?

In 1983, Lt. Col. Stanislav Petrov of the Soviet Union prevented a nuclear war by trusting his judgment and ignoring reports of an incoming US missile strike (the computer had mistaken the sun’s reflection off clouds for a missile). But Petrov had 30 minutes to make his decision, while today’s AI systems make decisions in milliseconds. Kobi Leins of King’s College London and Anja Kaspersen of Carnegie Council believe no human has the capacity to understand all the moving parts of such systems, let alone meaningfully intervene in them.
