Fear of AI surfaces government’s controlling tendencies in its latest policy

Critics argue that government control could hinder deployment speed in the crucial field of AI. (Stock Image)

Summary

  • Recent developments, including the EU’s Artificial Intelligence Act and the Global Partnership on AI, provide models for oversight and regulation

An advisory by the ministry of information technology, issued on 1 March, has sent shockwaves through India Inc. The government wants platforms using Generative AI (artificial intelligence) models, or LLMs (large language models), to explicitly seek government permission before deployment. They must also put an “under-trial” or “under-testing” label on their websites.

Such platforms, including Google’s Gemini (which has been at the centre of recent controversy), ChatGPT and Krutrim AI, must clearly state that the GenAI model could give incorrect information. Users should also be explicitly asked for consent before their personal data is processed.

The directive's broad wording initially left its scope ambiguous. Minister Rajeev Chandrasekhar subsequently clarified that the labelling requirement would apply only to major platforms and intermediaries, as outlined in the IT Rules. The government has asked the relevant platforms to submit status reports specifying the action taken within 15 days, amid concerns over the possible use of AI to create and spread false political content ahead of upcoming elections.

More clarity on the advisory is, however, necessary, since industry remains confused about its scope. Does every entity that deploys AI need to seek government permission, and if so, must it share details of its algorithm? Questions also arise about whether the labelling mandate covers sectors such as telecom, HR websites, banking and inventory management systems that use AI.

As somebody recently said, “We should not need to ask the government for permission to do maths.” Any sort of government control and permission is likely to slow down deployment. In an area like AI adoption, where speed is of the essence, this may be unacceptable.

A cursory glance shows there are well over 500,000 open-source AI algorithms available for anybody to use. It would be a futile and time-consuming task for the ministry to sort through these and grant granular permission to every entity that wished to use them. In addition, companies and individuals build AI in-house for many purposes; these developments too would require scrutiny if the advisory were taken literally.

The issue of intellectual property protection also arises – if a company develops an algorithm with potential monetary value, it may not wish to share the details with anybody, including the government. There may also be gaps in the legislation and regulations covering such situations, and these cannot be adequately addressed until the general elections are over.

Two recent developments could give policymakers a better roadmap for oversight and regulation, since they set down both broad principles for AI regulation and technical specifics for combating fake content.

One is the European Union’s (EU) Artificial Intelligence Act, agreed upon in December 2023. This, the first comprehensive law regulating AI, provides a benchmark for such regulation. India is also part of the Global Partnership on AI, which is trying to put together a broad set of principles for AI regulation based on similar logic.

The second is a commitment made at the Munich Security Conference in February by the world’s largest tech companies to combat the misuse of AI in elections – “The Tech Accord to Combat Deceptive Use of AI in 2024 Elections”. Over 40 countries, including India, the US and the UK, have elections this year, affecting the fate of over 4 billion individuals. How corporates like Meta, Microsoft, Google, X, OpenAI and TikTok, all of which are signatories, intend to do this is important.

The EU framework classifies AI systems according to risk. Higher risk leads to more stringent oversight and more obligations for providers and users. Limited-risk systems need only comply with transparency requirements that allow users to make informed decisions – labelling of the kind suggested in the Meity advisory is enough.

In the EU, AI that affects safety or fundamental rights is classified as “high risk”. This could be AI used in products like toys, aviation, cars and medical devices. The use of AI in certain specific areas must be registered; these include biometric identification, education, employment, access to essential services and financial benefits, law enforcement and immigration. Such high-risk AI systems must be assessed before rollout and reviewed regularly while in use. These are the areas on which Meity should focus.

Some systems are banned in the EU. These include cognitive manipulation of people (such as election-related fake news), toys that encourage dangerous behaviour, and social scoring that classifies people based on behaviour, socio-economic status or personal characteristics. Some real-time and remote biometric systems, like facial recognition, may be used only with court approval to identify and apprehend criminals after a serious crime has been committed.

The tech accord signatories will collaborate to detect and block online distribution of fake AI content about politics, and provide transparency about AI usage in generating political content. They will attempt to track the origin of deceptive political content and raise public awareness about it. They will block such content on their platforms. Given the high stakes, this is obviously not just a symbolic commitment.

In summary, it is imperative for the government to promptly provide clear and precise explanations regarding the advisory's implications and the types of AI deployments that necessitate explicit permissions. Given that legislation on this matter cannot be enacted until after the elections, there is a risk of a legal void. Potential regulatory overreach during this period needs to be carefully monitored.
