We must rein in the precocious Generative AI children. But how?
Generative AI refers to the broad swathe of AI models and tools (ChatGPT, DALL-E, Midjourney, Stable Diffusion, Bing, Bard, LLaMA, PaLM, etc.) designed to create, or generate, new content such as text, images, videos, music, or code -- hence the terms ‘generative’ and ‘multi-modal’. It has undoubtedly polarized AI experts, with some even calling for a six-month moratorium on building new systems and others arguing against the idea on the grounds that the benefits of AI far outweigh the perceived risks.
More than a thousand people, including Elon Musk, Yoshua Bengio, Stuart Russell, Gary Marcus and Andrew Yang, called for a six-month moratorium on training systems that are “more powerful than GPT-4”, arguing that such systems should be developed only when the world is confident that their risks can be contained.
There’s good reason to do so. Consider the case of ChatGPT, which has captured the imagination of the masses and made AI a household name but is just one face of Generative AI -- albeit the most popular and widespread one, with more than 100 million users since its launch on 30 November 2022.
Many universities in Japan, including Sophia University, the University of Tokyo and Tohoku University, have either banned or are reluctant to encourage the use of ChatGPT by students to write essays, theses, or homework. In India, Bangalore’s RV University has done likewise, while Dayananda Sagar University and the International Institute of Information Technology (IIIT-B) in Karnataka have also expressed their displeasure over students using ChatGPT. France-based Sciences Po, too, has banned the use of ChatGPT. Countries fear ChatGPT as well: Italy temporarily blocked it over data privacy concerns, and Russia, China, North Korea, Cuba, Iran, and Syria have followed suit.
LLMs or Gollums? Is the backlash warranted?
The exponential progress in generative AI models, which are used to create new content including audio, code, images, text, simulations, and videos, appears to have alarmed many ever since the launch of OpenAI’s ChatGPT -- they believe these models will think and act like humans, plagiarize the work of artists, and replace thousands of routine jobs. A 26 March note by Goldman Sachs estimates that generative AI could expose about 300 million full-time jobs worldwide to automation.
While language models, including ChatGPT, GPT-4, Google Bard, and Bing Chat, do hallucinate (confidently provide convincing but wrong answers), they are also transforming and disrupting the workplace and society at large. Besides, it’s almost utopian to expect big technology companies, which are not only trying to outrun each other in the AI race but also have to show returns to shareholders, to halt the progress of these models -- the economic compulsions are too high in a recessionary phase and may override societal considerations.
More importantly, the fear is that Generative AI is only getting smarter with each passing day, and researchers are unable to understand the ‘How’ of it. Simply put, since large language models (LLMs) like GPT-4 are self-supervised or unsupervised, researchers are unable to understand how they train themselves and arrive at their conclusions (hence, the term ‘black box’).
In ‘The AI Dilemma’, Center for Humane Technology co-founders Tristan Harris and Aza Raskin argue that “AI may help us achieve major advances like curing cancer or addressing climate change. But the point we’re making is: if our dystopia is bad enough, it won’t matter how good the utopia we want to create.” Among other things, they say that “Guardrails you may assume exist, actually don’t. AI companies are quickly deploying their work to the public instead of testing it safely over time. AI chatbots have been added to platforms children use, like Snapchat. Safety researchers are in short supply, and most of the research that’s happening is driven by for-profit interests instead of academia.”
Harris should know. He studied persuasive technology at Stanford University and built a company called Apture, which Google acquired. It was at Google that he first sounded the alarm on the harms posed by technology that manipulates attention for profit.
Harris and Raskin underscore that Generative AI is formidable because it can treat everything as a language, predicting not just the next word but also the next image, sound and so on. Even “fMRI data becomes a kind of language, DNA is just another kind of language. And so suddenly, any advance in any one part of the AI world became an advance in every part of the AI world. You could just copy-paste, and you can see how advances now are immediately multiplicative across the entire set of fields”. All these developments have prompted Harris and Raskin to call these LLMs ‘Gollum-class AIs’, after the fictitious creature in ‘The Lord of the Rings’.
Urgent need to put up guardrails: UNESCO’s clarion call
The United Nations Educational, Scientific and Cultural Organization (UNESCO) is categorical that AI “cannot be a no-law zone”. It acknowledges that AI can provide millions of students with support to complete secondary education, fill an additional 3.3 million jobs, and help us check the spread of the COVID-19 pandemic and deal with its aftermath, but it cautions that these technologies “also generate downside risks and challenges, derived from malicious use of technology or deepening inequalities and divides”. UNESCO recommends international and national policies and regulatory frameworks and a “human-centred AI” that serves the “greater interest of the people, not the other way around”.
In November 2021, the 193 member states at UNESCO’s General Conference adopted the Recommendation on the Ethics of AI. On 19 April, UNESCO exhorted all governments to implement this framework “without delay”. “Industry self-regulation is clearly not sufficient to avoid these ethical harms, which is why the Recommendation provides the tools to ensure that AI developments abide by the rule of law, avoiding harm, and ensuring that when harm is done, accountability and redressal mechanisms are at hand for those affected,” it said in a press statement.
More than 40 countries across all regions of the world are already working with UNESCO to develop AI checks and balances at the national level. UNESCO will present a progress report at its Global Forum on the Ethics of AI in Slovenia in December 2023.
In ‘The AI Dilemma’, Harris and Raskin say: “...Our friend Yuval (Noah) Harari, when we were talking to him about this (The AI Dilemma), called it this way, he said, what nukes are to the physical world, AI is to the virtual and symbolic world”.
Indeed! Nuclear energy can be very helpful, but it also has the potential to destroy the world. The Treaty on the Prohibition of Nuclear Weapons (TPNW) now bans the use, possession, testing, and transfer of nuclear weapons under international law. That said, it took the world almost 65 years to put such checks on nuclear weapons after the first atomic bombs were dropped on Hiroshima and Nagasaki in 1945. Harris and Raskin assert that “...nukes don’t make stronger nukes, but AI makes stronger AI”. So, we may not have 65 years to put guardrails around AI before it gallops out of control.
With the help of GPT-4, for instance, Auto-GPT can already generate code autonomously and subsequently “debug, develop, and self-improve” it through recursive mechanisms. Meanwhile, Amazon.com Inc. has joined Microsoft and Google in the Generative AI race with Bedrock, a service that makes foundation models (FMs) from AI21 Labs, Anthropic, Stability AI, and Amazon accessible via an API. Further, while GitHub’s code-completion tool Copilot (owned by Microsoft) offers complete code snippets based on context, Amazon has announced the preview of Amazon CodeWhisperer, its own AI coding companion. Even Elon Musk, who signed the six-month AI moratorium letter, is working on TruthGPT.
If it’s any consolation, Sam Altman has said there would not be a GPT-5, at least not in the near term. Speaking virtually at an event at the Massachusetts Institute of Technology (MIT) late last week, he said: “As capabilities get more and more serious, the safety bar has got to increase,” adding, “I think moving with caution and an increasing rigour for safety issues is really important. The letter, I don’t think, is the optimal way to address it.” Altman also said further progress would not come from making models bigger, according to Wired. “I think we’re at the end of the era where it’s going to be these, like, giant, giant models,” he told the audience. “We’ll make them better in other ways.”
But these verbal assurances won’t help, and neither will banning ChatGPT-like tools or LLMs provide an answer, since it’s not wise to throw the baby out with the bathwater. Hence, given the exponential pace at which LLMs have developed, governments will have to join forces with international bodies like UNESCO to develop global frameworks, which they can then apply in their own countries while keeping in mind their respective cultures, ethnicities, and other sensitivities (since AI biases stem from these considerations too) -- and all with a tremendous sense of urgency.
QUOTE OF THE WEEK
‘I don’t know if humans can survive AI’ -- Yuval Noah Harari, in an interview with The Telegraph