The world must not wait for one more Hiroshima to regulate AI



  • An IAEA-like body is the most viable option for managing the development of dual-use technology

It is telling that the G7 summit in Japan last month focused on three big issues, two of which were predictably the Ukraine war and the rise of an assertive China. The third was a bit unusual—Artificial Intelligence (AI) and how to regulate perhaps the most powerful technology created by man. It was also significant that the meeting was in Hiroshima, where the most destructive technology created in the 20th century was unleashed. Hiroshima was the ‘Never Again’ moment in nuclear warfare, and it directly led to global regulation through the International Atomic Energy Agency (IAEA) and the Non-Proliferation Treaty. Like nuclear, AI is a dual-use technology, promising amazing scientific breakthroughs like curing cancer, but also raising the spectre of a destructive super-intelligence and gross misuse. As AI, particularly Generative AI, races forward, regulation seems to be left far behind. In fact, I have often spoken of the need for humanity to get together to tame this tech before we face another Hiroshima moment in AI.

There has been a profusion of opinion on this, and the fears seem to have eclipsed the excitement around it. Regulating AI is not easy—it is a borderless technology moving at lightning speed across a geopolitically fractured world. One clear-eyed description of regulating AI that I found was by John Thornhill of the Financial Times, who wrote about the 4D challenge of regulating AI. The first D, in his view, is Discrimination—the power of AI and Machine Learning lies in spotting outliers in data patterns. This is how one can spot faults in a production line, or cancerous cells among normal cells; however, this same property can lead to bias on racial, gender or nationalistic grounds too, as the AI may see some of these as not fitting the pattern. Disinformation is the second issue; if we consider “WhatsApp University" the most efficient spreader of misinformation, meet the generator! Thornhill’s third D is Dislocation, primarily of jobs, as powerful AI engines like ChatGPT intrude into tasks done by humans. The final one is Devastation—the fear that super-intelligent AI, knowingly or otherwise, would cause the destruction of the human race.

So how do we regulate AI and manage these four D’s? There are multiple options being talked about. The first is licensing, which has been proposed by no less than OpenAI CEO Sam Altman. Altman suggested to the US Congress that AI companies should need some kind of licence to operate, so that they could be regulated to stick to its norms. Unlicensed startups would not be able to create AI. Many see this as a self-serving option, protecting incumbents, including OpenAI, against open source and newer competitors. Additionally, it is unlikely that China or Russia would cooperate with a US-led licensing regime. Another is a Food and Drug Administration (FDA) kind of use-case-led regulation: much like the FDA regulates new drugs and treatments in the US, demanding proof of efficacy and of no harm, a body would regulate the use of AI in sensitive areas like healthcare or aviation. To me, this proposal is dead on arrival; the time taken to regulate use case by use case and the global cooperation needed for it make it unviable. A third is a CERN-like approach, where countries and companies get together and do all research cooperatively, much as the Higgs boson, the ‘God particle’, was discovered at CERN. A variation of this is the ‘isolated island’ approach, where all research happens in a secure, air-gapped environment and is released into the wider world only after it has proved beneficial in this protected setting. Again, the thought is noble, but the practical efficacy of this seems suspect.

Another proposal gaining ground is to use the IAEA as a model for a global regulatory body on AI. Altman, among other industry doyens, has been talking about this on his world tour (including India). The IAEA is not perfect, is toothless in many areas, and has created an unequal world of nuclear haves and have-nots. But to give credit where it is due, there has not been a nuclear war since Hiroshima. There are many differences between AI and nuclear energy; for one, AI is much more democratized—any good software engineer can create new AI tools, with no need to invest in the equivalent of a nuclear reactor. The world is also not the same, and it will be difficult to rally it around this time. However, this is the most viable model, and I hope that the ‘Hiroshima moment’ that initiates global regulation is the G7 summit and not something far more devastating.

Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and is currently pursuing a Master’s in AI and Ethics at Cambridge University.
