
Friday, April 12, 2024
techtalk
By Leslie D'Monte

Why Nvidia remains the AI chip boss, at least for a while

I recently interviewed Andrew Feldman, co-founder and CEO of US-based Cerebras Systems, which last month released what it touts to be the world's largest and fastest artificial intelligence "chip" with 4 trillion transistors. According to Feldman, his company's big chip, CS-3, scores over Nvidia's graphics processing unit (GPU) primarily because it allows clients to do all their modeling on a single chip.

One of the fundamental challenges in AI has been distributing a single model over hundreds or thousands of GPUs. Feldman explained that the big matrix multipliers (matrix multiplication makes up much of the math in deep learning models and requires significant computing power) cannot fit on a single GPU, whereas they do fit on a single Cerebras chip, bringing to the enterprise and the academician "the power of tens of thousands of GPUs but the programming simplicity of a single GPU, helping them do work that they wouldn't otherwise be able to do".
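To make the sharding idea concrete, here is a toy sketch in plain Python of the tensor-parallel pattern Feldman alludes to: the weight matrix is split column-wise across "devices", each device computes its slice of the product, and the slices are stitched back together. The function names and the two-way split are illustrative only, not how Cerebras's or Nvidia's software actually works.

```python
def matmul(A, B):
    # Plain O(n^3) matrix multiply: C[i][j] = sum over p of A[i][p] * B[p][j]
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(inner))
             for j in range(cols)] for i in range(rows)]

def sharded_matmul(A, B, shards=2):
    # Tensor-parallelism sketch: split B column-wise across `shards`
    # pretend-devices, let each compute its own slice of the output.
    cols = len(B[0])
    outputs = []
    for s in range(shards):
        lo, hi = s * cols // shards, (s + 1) * cols // shards
        B_shard = [row[lo:hi] for row in B]   # this device's columns of B
        outputs.append(matmul(A, B_shard))    # partial result on one device
    # Stitch the column slices back together, row by row
    return [sum((out[i] for out in outputs), []) for i in range(len(A))]
```

On real hardware the expensive part is not the arithmetic but moving activations between devices at each layer boundary, which is exactly the communication overhead a single giant chip avoids.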


Companies like GlaxoSmithKline Pharmaceuticals are using Cerebras for genomic research and in drug-design workflows. Mayo Clinic, one of the leading medical institutions in the world, is exploring the use of genetic data to predict which rheumatoid arthritis drug would work best for a particular individual. Others are trying to predict how long a patient will stay in hospital based on their medical history.

Customers like TotalEnergies, the French oil and gas giant, are using Cerebras in oil exploration. Government researchers are including the company's CS-3 system in giant physics simulations, in an approach called simulation-plus-AI, or HPC (high-performance computing)-plus-AI, in which the AI does some of the training work and recommends starting points for the simulator. You may read the full interview here: 'AI is a particularly well-suited tech trajectory for India: Cerebras' Feldman'.

Feldman acknowledges that the competition with Nvidia is stiff. He claims that CS-3 costs about the same as three DGX H100s (Nvidia's AI system that is capable of handling demanding tasks such as generative AI, natural language processing, and deep learning recommendation models) but gives you the performance of seven to ten. He also admits that if a company wants just a few GPUs, Cerebras may not be a good choice; the system pays off only when a company wants the equivalent of 40-50 GPUs, which is a higher entry point.

The fact is that Nvidia's estimated 80-95% global market share in AI chips makes it very difficult for any company to compete with it. You may read the piece 'How Nvidia is surfing the AI wave', which I wrote in May 2017, and the one I wrote in September 2023, titled 'Hottest graphics processing unit maker's India playbook'.

In both these pieces, I have underscored the fact that while central processing units (CPUs) can be likened to the brain of computing devices such as mobile phones, desktop computers, laptops and servers, GPUs are specialized processors that, unlike CPUs, are designed to handle many tasks simultaneously, a feature known as parallel computing. That makes them better suited for high-performance computing tasks, as in supercomputers, and for training large language models (LLMs), models that can learn and comprehend language. Deploying a large number of high-performance GPUs helps shorten the training time for models such as ChatGPT.
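As a loose analogy for the parallel computing described above, here is a minimal Python sketch that splits one big element-wise job across several workers, with threads standing in for GPU cores. The helper names are made up for illustration, and Python threads share a global interpreter lock, so unlike a GPU, which runs thousands of such lanes in hardware at once, this only mimics the structure of the idea, not its speed.

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    # Each worker applies the same simple operation to its own slice,
    # independently of all the others -- the essence of data parallelism
    return [x * x for x in chunk]

def parallel_square(data, workers=4):
    # Split the input into one contiguous slice per worker, loosely like
    # a GPU handing identical small tasks to its many cores
    n = len(data)
    bounds = [(w * n // workers, (w + 1) * n // workers)
              for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(square_chunk,
                           [data[lo:hi] for lo, hi in bounds])
    merged = []
    for part in results:        # reassemble the slices in order
        merged.extend(part)
    return merged
```

A CPU excels when the work is one long chain of dependent steps; the pattern above wins only when the work breaks into many independent identical pieces, which is exactly what training a neural network looks like.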

But there are challengers like Cerebras, apart from Intel and AMD. Further, Nvidia's biggest clients, including Meta, Microsoft, Google and Amazon, are also investing heavily in developing their own AI chips (e.g., Amazon's AWS Trainium and Google's TPU).

Data centre GPUs alone could help Nvidia make $87 billion, according to market research firm Omdia.

For now, though, Nvidia remains on top of the AI chip chain.

Where do Americans stand on Generative AI?

Here are a few charts on the state of Generative AI models and how the public feels when it comes to AI regulation, according to market research firm Ipsos. Would the results be vastly different if a similar survey were done in India?

YOU MAY ALSO WANT TO READ

What Elon Musk and Jamie Dimon’s AI predictions mean for the future of humanity

Elon Musk and Jamie Dimon say artificial intelligence will be smarter than humans and transform society. The question now is whether the prognostications of one of the world’s richest people and the head of the US’s largest bank, JPMorgan Chase, will come to fruition, or turn out to be overstated. In remarks this week, both Musk and Dimon joined a chorus of business executives making bold predictions about AI’s potential for dramatic change.

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year," Musk said in an interview with Nicolai Tangen, CEO of Norges Bank Investment Management, Norway’s $1.6 trillion sovereign fund and one of the largest investors in Tesla. The interview was broadcast on Musk’s social-media platform X. Musk, who is chief executive of Tesla and also runs his own AI company, said AI was the fastest-advancing technology he’s ever seen. He predicted it will probably surpass the collective intelligence of humans in five years.

Dimon, chief executive of JPMorgan Chase, told investors that AI could be as transformative as some of the major technological inventions over the past several hundred years. “Think the printing press, the steam engine, electricity, computing and the Internet, among others," Dimon wrote in his highly anticipated annual letter to shareholders. He has said AI might lead future generations to only work 3½ days a week.

Fake videos of NSE MD and CEO Ashish Chauhan recommending stocks

India's National Stock Exchange has issued a clarification on fake viral audio clips and videos of its MD and CEO Ashishkumar Chauhan recommending specific stocks. The stock exchange said it has observed the use of the face and voice of Chauhan and NSE logo in a few investment and advisory audio and video clips, falsely created using technology. “Such videos seem to have been created using sophisticated technologies to imitate the voice and facial expressions of Shri Ashishkumar Chauhan, MD & CEO of NSE. Investors are hereby cautioned not to believe in such audio and videos and not follow any such investment or other advice coming from such fake videos or other mediums," NSE said in a statement.

It was only last week that I wrote about how public speakers, podcasters, or simply those who love sharing video and audio on social media sites, need to be aware that AI tools can mimic your voice in seconds and impersonate you.


The latest such tool is OpenAI's Voice Engine, which can impersonate your voice from just a 15-second recording. The problem is that most people are unable to distinguish between human- and AI-generated text or images.

OpenAI transcribed over a million hours of YouTube videos to train GPT-4

The Wall Street Journal reported that AI companies are running into a wall when it comes to gathering high-quality training data; according to The Verge, companies are dealing with this in ways that fall into the hazy gray area of AI copyright law.

How people are really using GenAI

Corporate horror stories, policy restrictions, and hallucinations understandably give people pause about deploying GenAI, and general technophobia means that most people globally have still not tried it. Even among the world's billion knowledge workers, just 10% use ChatGPT (which enjoys a 60% market share) regularly.

Hope you folks have a great weekend, and your feedback will be much appreciated.

Livemint.com | Copyright © HT Digital Streams. All Rights Reserved