A holographic avatar, Ella, listening to composer Kevin Doucette playing notes on his synthesizer, at the Intel AI Devcon 2018 in Bengaluru.

CPUs vs GPUs: Which chips will give firms the AI edge?

Both central processing units (CPUs) and graphics processing units (GPUs) are capable of handling AI workloads effectively, say experts

Mumbai: Early this month at the Intel AI Devcon 2018 in Bengaluru, a holographic avatar called Ella listened intently to composer Kevin Doucette playing notes on his synthesizer. When he paused, she began composing her own notes, complementing his music in real-time.

How did this happen? Ella was learning about features such as tempo, scale and pitch from the music data that was being sent in real-time to an Intel Movidius Neural Compute Stick. Intel used a class of artificial neural networks, the recurrent neural network or RNN that depends on previous calculations to work on current ones, to perform this artificial intelligence (AI) task.
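The defining property of an RNN described above—that each new calculation depends on previous ones—can be sketched in a few lines. The following is a minimal, illustrative single-unit RNN step in plain Python; the weights and inputs are arbitrary placeholders, not anything from Intel's actual model.

```python
import math

def rnn_step(x, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    """One RNN time step: the new hidden state mixes the current
    input x with the previous hidden state h_prev (the recurrence)."""
    return math.tanh(w_x * x + w_h * h_prev + b)

# Feed a short sequence; the hidden state carries memory forward in time.
h = 0.0
states = []
for x in [0.2, 0.5, -0.1, 0.3]:  # stand-ins for features like tempo or pitch
    h = rnn_step(x, h)
    states.append(h)

# Changing an early input changes all later states, because the network
# "remembers" past inputs through the recurrent connection w_h.
h_alt = 0.0
for x in [1.0, 0.5, -0.1, 0.3]:
    h_alt = rnn_step(x, h_alt)
```

Here `h_alt` ends up different from the last value in `states` even though only the first input changed—exactly the dependence on prior calculations that lets an RNN respond to a musical phrase as it unfolds.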

The Neural Compute Stick is a case in point: Intel—a company most people identify with the central processing units (CPUs) inside personal computers (PCs), mobiles and servers—is widening its portfolio to stay in an AI race whose strong contenders include Nvidia, Microsoft, Google, Facebook, IBM, Amazon, Apple, Alibaba and Baidu.

At stake is an AI-focused hardware, software and services global market that is predicted to touch $58 billion by 2021, according to International Data Corp. (IDC).

“Software is a critical component of our AI strategy," insists Amir Khosrowshahi, vice-president and chief technology officer of the AI Products Group (AIPG) at Intel. As part of that strategy, Intel offers its Xeon chips for data centres, Movidius for embedded vision and Mobileye for the automotive sector. It also has programmable chips called field programmable gate arrays, or FPGAs.

Microsoft’s hardware architecture called Project Brainwave, for instance, is deployed on Intel’s FPGA to help it make real-time AI calculations. Philips, on its part, uses Xeon Scalable processors to handle its AI workloads.

Intel also funds SETI@home, a programme that aims to detect intelligent life outside Earth. “We are using AI techniques to augment the search," says Khosrowshahi. Intel is also funding research in connectomics—the study of connectomes, maps of the neural connections in the brain—and is working with labs at Harvard and the Massachusetts Institute of Technology (MIT), providing the computing power they need to slice brain tissue and image it at the nanoscale, which generates enormous volumes of data. At nanoscale resolution, for instance, the neocortex of a rat would generate about an exabyte of image data (about 250 million DVDs worth of information). This requires a lot of computing power.

Intel’s progress on the AI path, however, is no smooth ride. There is fierce competition for the AI market, especially from Nvidia, which designs the graphics processing units (GPUs) that power many AI applications. The Airbus Group, for instance, is working on GPU-powered autonomous air taxis christened Vahana. Google has its own tensor processing unit (TPU) chip even as it collaborates with Nvidia, whose GPUs accelerate Google’s deep learning framework, TensorFlow. Further, public cloud services providers such as Alibaba Group Holding Ltd, Amazon Web Services, Baidu Inc., Facebook, Google, IBM, Microsoft and Tencent Holdings Ltd use Nvidia GPUs in their data centres.

Yet, Khosrowshahi believes “there is no firm (like Intel) that has such a holistic approach to AI—all the way from silicon substrate to fabs where we can build our own test chips, can exploit different architectures, etc."

He also argues that GPUs are not necessarily better than CPUs when it comes to AI. “I have been in the business of AI for 10 years. There is a lot of potential—your Siri is going to get better, autonomous driving will get better—there are many possible futures and we need all these tools (CPUs, GPUs, TPUs, etc.). There is no one tool that will be the winner. GPUs are somewhere between CPUs and ASICs (application-specific integrated circuits). CPUs can run all workloads. But GPUs are limited in memory. We are building domain-specific Nervana neural network processors (NNPs). There is a strong business need for it."


