
Friday, Sep 01, 2023
techtalk
By Leslie D'Monte

Is Paytm building an AGI software stack? Or is it AGI-washing?

In his 23 August letter to shareholders, part of Paytm's 492-page 2023 annual report, Vijay Shekhar Sharma, Founder, CMD & CEO of Paytm, owned by One97 Communications, said, among other things, that as Paytm focuses on "small mobile credit with high credit quality" that fully complies with regulatory guidelines, it will require "sophisticated capabilities in AI (artificial intelligence) and other technologies". He added that Paytm is building "an India scale AI system to help various financial institutes in capturing possible risks and frauds, while also protecting them from new kinds of risks due to advancement in AI".

Further, Sharma said Paytm "is investing in AI with an eye on building an Artificial General Intelligence (AGI) software stack". He added that by building this stack in India, Paytm is not only making it "our country's tech capability" but also "creating something that could be leveraged outside India". In May, too, he had spoken about how the "advent of early-stage AGI in 2023" would help him make Paytm's business more efficient.

Picture courtesy of Mint

Developing an advanced AI system is logical for the financial sector, which has been an early adopter of AI tools. However, talking about building an AGI stack and exporting it eventually from India to other countries may tick the 'Made in India' box but is a completely different proposition. It raises many questions, especially since Paytm is now a listed company on the Indian bourses with its shareholders and potential investors keenly listening to the management discussion to understand the company's roadmap.


Given the company's experience and technology expertise in building services including Paytm Wallet, Paytm QR, Paytm Soundbox, Paytm Postpaid, Merchant Cash Advance and FASTag, one can safely assume that the management team would know the difference between AI and AGI -- especially so, since Sharma himself has made that distinction in his letter to the shareholders.

However, since Paytm has not defined what it means by AGI, nor provided any details of what it means by an "early-stage AGI software stack", we will have to work with the broadly-accepted definition of AGI--a system that can reason and think for itself like humans, is conscious, and can emote too--to examine the company's claim. An email sent to Paytm late Thursday night, requesting details of the AGI software stack plans, remained unanswered till going to press.

How far or close are we from AGI?

Physicist Mark Gubrud was the first to use the term AGI in 1997. However, it was Webmind founder Ben Goertzel and DeepMind co-founder Shane Legg who were instrumental in popularising the term around 2002. Companies that typically talk about AGI and its applications are mostly the big tech companies including Google, Microsoft, IBM, Meta, OpenAI, Nvidia, DeepMind (a Google unit), Anthropic, Hanson Robotics (best known for its Sophia humanoid), and Elon Musk's Neuralink (developing implantable brain-computer interfaces).

In his 2005 book, The Singularity Is Near, Raymond "Ray" Kurzweil, an American author, computer scientist, inventor and futurist, predicted, among many other things, that AI will surpass humans, the smartest and most capable life forms on the planet. His forecast is that by 2099, machines will have attained equal legal status with humans. According to the broadly-accepted definition of AGI, such a system would typically think and act like humans, and eventually even surpass human intelligence. Achieving this goal is known as AI Singularity or Artificial General Intelligence (AGI); crossing this barrier would require such an AI's intelligence to exceed that of the most intelligent humans, making it a sort of Alpha Intelligence that can call the shots and even enslave humans.

Picture courtesy of Mint

According to IBM, AGI would perform on par with a human, while Artificial Super Intelligence (ASI), also known as superintelligence, would surpass a human's intelligence and ability. That said, many experts use the term AGI to cover both.

On 30 May 2022, Elon Musk tweeted: "2029 feels like a pivotal year. I'd be surprised if we don't have AGI by then. Hopefully, people on Mars too." Cognitive scientist Gary Marcus bet $100,000 against the timeline, while Indian-American technology entrepreneur Vivek Wadhwa concurred with another $100,000 bet. Other experts joined in and raised the cumulative bet to $500,000, but Musk did not respond.

Around the same time, Yann LeCun, Chief AI Scientist at Meta, said in a LinkedIn post: "I think the phrase AGI should be retired and replaced by 'human-level AI'. There is no such thing as AGI. Even human intelligence is very specialized. We do not realize that human intelligence is specialized because all the intelligent tasks we can think of are tasks that we can apprehend.

But that is a tiny subset of all tasks. The overwhelming majority of tasks are completely out of reach of un-augmented human intelligence..."

Regardless of these debates, intelligent systems that fit the AGI definition are currently in the realm of science fiction--be it software like the one portrayed in the Hollywood film 'Her', starring Joaquin Phoenix, a machine (like Skynet in Terminator), an AI assistant (like Jarvis in Iron Man), an android (as in I, Robot), or a humanoid (like Bicentennial Man or Ultron). Whatever contraption you imagine it to be, such a system would be conscious of itself, and be able to think and converse like most humans.

And while it’s tempting to label AI developments like driverless cars and trucks as AGI, they are still higher manifestations of “weak or narrow AI” as I explained in an earlier column. Even unsupervised algorithms that unearth hidden patterns or data groupings without the need for human intervention do not qualify as true AGI, even though they may be bringing us a step closer to it.

What about Generative AI?

Foundation models, a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence, are trained on a broad set of unlabeled data and can be used for different tasks such as generating (hence the term, generative AI) images, video, language, etc., with minimal fine-tuning. The third iteration of the Generative Pre-trained Transformer (GPT-3), with 175 billion parameters, impressed many with its potential to write human-like poems, articles, books, tweets, resumes, and even code.

The reason is that large language models (LLMs) like GPT-3, and LLM-powered chatbots like ChatGPT, are trained on billions of words from sources such as the internet and books, including Common Crawl and Wikipedia, which makes them more knowledgeable than most humans. GPT-4 is rumoured to have 100 trillion parameters (a figure dismissed by Sam Altman as "rubbish"), and it significantly outperforms GPT-3. Further, OpenAI has already filed a trademark application for 'GPT-5' with the US Patent and Trademark Office (USPTO), which covers computer software for generating human speech and text, as well as natural language processing, generation, understanding, and analysis.

Other examples include Llama 2, BERT, and BLOOM, to name a few, many of which are hosted on platforms like Hugging Face. Earlier neural networks were narrowly tuned for specific tasks. With a little fine-tuning, foundation models can handle jobs from translating text to analyzing medical images, and even generating new molecules for drug discovery. But however impressive these developments are, even they do not qualify as true AGI if one does not dilute the definition.

These developments have both excited and unnerved people. Some AI experts including Elon Musk have called for a six-month moratorium on building such foundation models (which include LLMs), while others have even equated the risk of AI to that from a nuclear war. Others like Yann LeCun and Andrew Ng insist that AI is far from becoming sentient, and that the benefits of AI far outweigh its perceived risks.

The AGI debate

Altman wrote in February that AGI has the potential to give everyone "incredible new capabilities" and "provide a great force multiplier for human ingenuity and creativity" but would also come with "serious risk of misuse, drastic accidents, and societal disruption", which explains why "the developers of AGI have to figure out how to get it right". He suggested a gradual transition to give people, policymakers, and institutions time to digest the benefits and pitfalls of these systems, and to put regulation in place.

To be sure, the very AI that is already empowering our smartphones, cameras, driverless vehicles, low-cost satellites, chatbots, smart robots in our offices and homes, and helping farmers by providing flood forecasts and warnings, can also generate AI clones, deepfakes, malware, and other mischievous applications. These models can plagiarize, have the potential to replace thousands of routine jobs, and also pose security and privacy risks coupled with inherent gender and race biases. But what we fear most is that an intelligent AI may soon become sentient and subjugate us.

"Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more," said Altman. In FY 2023, for instance, Paytm had an average of 32,798 on-roll (29,503 active on-roll) and 1,589 off-roll employees worldwide, inclusive of all its subsidiaries but excluding contract labour, according to its annual report. What would be the impact of this level of automation on the workforce, if indeed the company has managed to crack AGI, and plans to implement it in the near future?

Second is the cost factor. A traditional AI framework that deals with task-specific models typically encompasses gathering, processing, storing and analysing data using mathematical models, machine learning and deep learning algorithms, and various statistical methods to glean insights from the data. The hardware and software components would include CPUs (central processing units), GPUs (graphics processing units), operating systems, virtualization tools, and containerization solutions, along with programming languages, libraries, and frameworks such as TensorFlow and PyTorch.
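For readers curious what a "task-specific model" of the narrow-AI kind looks like in practice, here is a deliberately toy sketch of a fraud scorer, the sort of risk-and-fraud task Sharma's letter describes. The features, data, and thresholds below are entirely made up for illustration; a real system would train far larger models on real transaction data with frameworks like TensorFlow or PyTorch, not this stdlib-only logistic regression.

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function mapping a raw score to a 0-1 fraud probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression fraud scorer by stochastic gradient descent."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = pred - y  # gradient of the log-loss with respect to the raw score
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def score(weights, bias, x):
    """Return the model's fraud probability for one transaction."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Made-up features per transaction: [amount (normalised), new device?, odd hour?]
transactions = [
    [0.1, 0, 0], [0.2, 0, 0], [0.1, 0, 1], [0.3, 0, 0],   # legitimate
    [0.9, 1, 1], [0.8, 1, 0], [0.95, 1, 1], [0.7, 1, 1],  # fraudulent
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

w, b = train(transactions, labels)
low_risk = score(w, b, [0.1, 0, 0])   # should come out well below 0.5
high_risk = score(w, b, [0.9, 1, 1])  # should come out well above 0.5
```

The point of the sketch is the contrast with AGI: this model does exactly one job, on hand-chosen features, and nothing else. That is "weak or narrow AI", however many such models a stack contains.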

According to Paytm's annual report, the company has spent Rs.1,332.60 crore till 31 March on "strengthening and expanding our technology-powered payments platform". According to its latest financial reports, the company has $1.40 billion in cash and cash equivalents. Even an elementary AGI stack will cost upwards of millions of dollars, even if it's a progression of an existing AI stack, since it will require humongous amounts of data (synthetic or otherwise), a lot of computing power, and highly-skilled employees with expertise in deep learning neural networks, among other things.

Further, India does not currently have rules that specifically regulate the use of AI, and neither do the Reserve Bank of India or Securities and Exchange Board of India have any specific regulations. How can the authorities, then, regulate AGI?

On 20 August, Legg (DeepMind) tweeted that "By AGI I mean: better than typical human performance on virtually all cognitive tasks that humans can typically do. I think we're not too far off. That some people now want to set the AGI bar so high that most humans wouldn't pass it, just shows how much progress has been made!" Having popularised the term AGI, Legg makes a fair point.

But, then, one can also argue that there's a lot of AGI-washing that companies do by rebranding AI as something it is not, and later diluting the definition to suit their needs.

Digital mobile payments firm Paytm, owned by One97 Communications, pioneered mobile payments in India and has undoubtedly helped strengthen the country's digital payments and financial services ecosystem over the last decade. Paytm says its strength lies in owning each layer of the payment stack, which has allowed it, its associates and financial institution partners, to offer all the services cited above. Further, its cutting-edge Canada-based unit, Paytm Labs, continues to enhance its AI and big data features.

That said, while Paytm clearly has advanced AI capabilities and is well on the road to building an India-scale AI system to help financial institutes capture "possible risks and frauds, while also protecting them from new kinds of risks due to advancement in AI", as Sharma pointed out in his letter to the shareholders, Paytm should be transparent about its AGI stack plans, if indeed it is working on an AGI stack that adheres to the commonly-accepted definition of AGI. Else, it will degenerate into an example of AGI-washing.

Hope you folks have a great weekend, and your feedback will be much appreciated.
