
Friday, Sep 15, 2023
techtalk
By Leslie D'Monte

Will Google Gemini and Meta’s new AI model outperform GPT-4?

Generative artificial intelligence (AI) models, which create new content such as text, images, audio, video, code, and simulations from natural language ‘prompts’, are already being used in at least one business function, according to one-third of the respondents to McKinsey’s August Global Survey. Moreover, 40% of the respondents said their organizations will increase their overall investment in AI because of advances in generative AI.

The respondents appear to be on the right track, with big tech companies upping their game with every passing day. Consider these developments. Meta, according to a report in the Wall Street Journal, is working on a new AI system that is expected to be more powerful than OpenAI’s GPT-4. The system, which could be ready next year according to the report, is aimed at helping companies develop sophisticated text analysis and other services. The new AI model is expected to be several times more powerful than Llama 2, released just two months ago. The report notes that while Llama 2 was made available on Microsoft’s Azure cloud computing platform, Meta plans to train the new model on its own infrastructure.


Meta is currently in the process of acquiring Nvidia’s H100 chips for AI training (according to Nvidia, the H100 is up to nine times faster for AI training and 30 times faster for inference than the A100), and is building data centres to help train the model. Meta CEO Mark Zuckerberg is reportedly pushing for the new model to be open-sourced and made available for free for companies to build AI-powered tools, in line with Meta’s previous AI offerings. However, the WSJ report notes that this approach could have downsides, including risks around the use of copyrighted information and the spread of misinformation.

Google, too, is planning to regain its glory with Gemini. After all, its Transformer architecture is the basis for today’s foundation models and large language models (LLMs). Gemini, touted as Google’s “next-generation foundation model”, is still in training. Once fine-tuned and rigorously tested for safety, it will be made available in various sizes and capabilities.

Semiconductor research and consulting firm SemiAnalysis said in a 28 August report by Dylan Patel and Daniel Nishball that Google Gemini will smash GPT-4 by 5x by the end of 2023 and by 100x by the end of 2024. According to the authors, “Google had all the keys to the kingdom, but they fumbled the bag”. They were referring to Google’s Meena model, released before the pandemic broke out. “The Meena model has 2.6 billion parameters and is trained on 341 GB of text, filtered from public domain social media conversations. Compared to an existing state-of-the-art generative model, OpenAI GPT-2, Meena has 1.7x greater model capacity and was trained on 8.5x more data,” reads Google’s 28 January 2020 blog post. But Google could not capitalize on Meena’s potential, and on 30 November 2022, OpenAI’s ChatGPT stole the show. Google, according to SemiAnalysis, will get its mojo back with Gemini.

Expectedly, OpenAI’s Sam Altman countered with a tweet on 29 August: “Incredible Google got that Semianalysis guy to publish their internal marketing/recruiting chart lol”. Elon Musk responded: “Are the numbers wrong?” to which Patel said: “They are correct”. The debate, as we all know, has just begun.

Meanwhile, the UAE-based Technology Innovation Institute (TII) recently unveiled Falcon 180B, a 180-billion-parameter model trained on 3.5 trillion tokens using 4,096 GPUs. It is 2.5 times bigger than Llama 2 and used four times the computational resources. The new release is a significant escalation from TII’s earlier models, which had 1B, 7B, and 40B parameters. Falcon 180B not only outpaces GPT-3.5 on multiple benchmarks but also reinforces the surging open-source trend in foundation models.

The open-source community, propelled initially by Stable Diffusion and later by Llama and Falcon, is narrowing the performance gap with commercial models like GPT-4. I wrote earlier about this in ‘Meta’s Llama 2: Why Open-source LLMs are the joker in the Generative AI pack’. In an earlier column titled ‘Five trends that may change the course of Generative AI models’, I spoke about the rise of smaller open-source LLMs. Big tech companies like Microsoft and Oracle once strongly opposed open-source technologies but embraced them after realizing they couldn’t survive otherwise. Open-source language models are demonstrating this once again.

Why is Nvidia wooing India?

Five months ago, Union minister for electronics and information technology Ashwini Vaishnaw hinted that India might soon get its own version of ChatGPT, the generative AI chatbot that has both excited and unnerved people with its ability to generate new content such as text, images and video from ‘prompts’ in a natural language like English. “Wait for a few weeks. There will be a big announcement,” he said while speaking at the India Global Forum Annual Summit on 27 March.

Vaishnaw’s statement generated both excitement and scepticism. Excitement, because India has proven technical skills and world-class research institutions, including the Indian Institutes of Technology (IITs), the Indian Institute of Science (IISc) and the Centre for Development of Advanced Computing (C-DAC). The country also houses hundreds of integrated circuit and chip design companies, and has built an exemplary digital India stack comprising the Aadhaar-enabled Payment System (AePS), Unified Payments Interface (UPI), Open Network for Digital Commerce (ONDC), and Account Aggregator (AA), to name a few applications.

Yet, there was scepticism, too, since India is yet to build a semiconductor fab that can make chips powerful enough to power a homegrown large language model (LLM) that can compete with the likes of OpenAI’s ChatGPT. Further, while India may have a very large repository of local datasets, the graphics processing units (GPUs) and AI platforms that power an LLM-based chatbot can cost hundreds of millions of dollars.

When OpenAI CEO Sam Altman visited India in June, it generated a lot of interest against this backdrop. However, Altman ended up making news for a very different reason. When Rajan Anandan, Managing Director at PeakXV Partners (formerly Sequoia Capital India & SEA), asked him if any startup in India could build a ChatGPT-like model with about $10 million, Altman said it was “completely hopeless to challenge us in training foundational models...but you can try...”. India Inc., including Anandan and the CEO of Tech Mahindra, C.P. Gurnani, countered by expressing their determination to compete with OpenAI. Anandan tweeted: “...Five thousand years of Indian entrepreneurship have shown us that we should never underestimate the Indian entrepreneur. We do intend to try...”

Even though Altman was quick to clarify that his quote was “taken out of context” and that “the question was about competing with us with $10 million, which I really do think is not going to work...but I still said try!”, India Inc. remained disappointed.

But when Jensen Huang, founder, president, and CEO of Nvidia, whose GPUs power ChatGPT, visited the country earlier this month, he renewed the hope of firing up India’s AI ecosystem and even building a homegrown ChatGPT-like bot.

Huang’s visit was much talked about, even though top dignitaries were prepping for the two-day 2023 G20 New Delhi Summit that began on 9 September. The reason is simple: governments and companies around the world are now enamoured with the power of AI and generative AI models, notwithstanding the concerns around them.

Nvidia, along with big tech companies such as Google, Microsoft, OpenAI, Intel and AMD, is powering the AI drive globally. Huang, who owns about 3.5% of Nvidia, is convinced that the “global generative AI race is in full steam”. He is banking on the fact that “data centres worldwide are shifting to GPU computing to build energy-efficient infrastructure to support the exponential demand for generative AI”.

Central processing units (CPUs) can also be used to train AI models, but the parallel architecture of GPUs lets them run thousands of calculations simultaneously. Training an AI model involves millions of calculations, and parallel computing dramatically speeds up the process. This has transformed Nvidia from a gamer’s delight into the poster boy of AI and generative AI. It is now the darling of investors, who valued it at about $1.13 trillion as of 8 September, pegging Huang’s own net worth at a little over $40 billion.
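The speedup from doing many calculations at once, rather than one at a time, can be felt even on an ordinary laptop. A minimal illustrative sketch in Python: the same matrix multiplication is computed first with a plain one-at-a-time loop, then with NumPy’s vectorized `@` operator, which hands the work to an optimized BLAS library that exploits SIMD units and multiple cores. (This is only an analogy for CPU-versus-GPU training, not actual GPU code; the matrix size of 300 is an arbitrary choice for the demo.)

```python
import time
import numpy as np

# Two random 300x300 matrices
rng = np.random.default_rng(0)
a = rng.random((300, 300))
b = rng.random((300, 300))

def matmul_loop(x, y):
    """Sequential matrix multiply: one multiply-add at a time."""
    n, m, p = x.shape[0], x.shape[1], y.shape[1]
    out = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += x[i, k] * y[k, j]
            out[i, j] = s
    return out

t0 = time.perf_counter()
slow = matmul_loop(a, b)          # pure-Python loop, ~27 million steps
t1 = time.perf_counter()
fast = a @ b                      # vectorized: many operations in parallel
t2 = time.perf_counter()

# Same answer, vastly different speed
assert np.allclose(slow, fast)
print(f"loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.5f}s")
```

On typical hardware the vectorized version finishes orders of magnitude faster; GPUs push the same idea much further, with thousands of cores working on a model’s calculations at once.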

According to a 27 May report by investment bank JPMorgan, Nvidia could garner about 60% of the AI market this year on the back of its hardware products, such as GPUs and networking gear. Nvidia only designs chips (a ‘fabless’ model), which are then manufactured by companies such as Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics Co. Ltd. Its competitors include Advanced Micro Devices (AMD), Intel, Alibaba Group, Alphabet, Amazon, Meta, Qualcomm, Broadcom, and Baidu, all of whom make their own chips. Google has its own tensor processing units (TPUs) and both partners and competes with Nvidia. Microsoft, too, is reportedly working on its own AI chips for training LLMs, to reduce its reliance on Nvidia.

Competition may eventually bite into Nvidia’s share, but it is clearly the leader for now. However, Huang knows well that Nvidia will need to tap increasingly lucrative markets like India to offset any potential loss of revenue. With India Inc. reciprocating, his path has only become easier. You may read more about this here: ‘RIL, Tata sign AI deals with Nvidia’.


AND THERE’S MORE TO READ

ChatGPT is guzzling water

A new study by researchers at the University of California, Riverside and the University of Texas at Arlington estimates the water footprint of AI models, finding that ChatGPT “drinks” about 500 ml of water for every conversation. It estimates that training GPT-3 in Microsoft’s state-of-the-art US data centres consumed nearly 700,000 litres of water, and that the figure could be three times higher in Asian data centres. The paper, titled “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models”, is currently awaiting peer review.

Companies want companies to trust AI - but not completely

Top AI influences: Time100 list

TIME magazine’s inaugural TIME100 AI list showcases the 100 most influential people in AI across four categories: Leaders, Innovators, Shapers, and Thinkers. The 2023 list features 43 CEOs, founders and co-founders, including Elon Musk of xAI, Sam Altman of OpenAI, Andrew Hopkins of Exscientia, Nancy Xu of Moonhub, Kate Kallot of Amini, Pelonomi Moiloa of Lelapa AI, Jack Clark of Anthropic, Raquel Urtasun of Waabi, and Aidan Gomez of Cohere. Several people of Indian origin have also made it to the list. The youngest on it is Sneha Revanur, the 18-year-old founder and president of Encode Justice.

Apple is spending millions daily to bring ChatGPT-like capabilities to Siri

Apple is investing heavily in research and development to enhance Siri’s conversational capabilities and is working on multiple AI models across different teams. The company is focusing on developing image, language, and multimodal AI models and is also testing its own AI chatbot.

Hope you folks have a great weekend. Your feedback will be much appreciated.

Livemint.com | Privacy Policy | Contact us
You received this email because you signed up for HT newsletters or because it is included in your subscription.
Copyright © HT Digital Streams. All Rights Reserved