With neural networks on your phone, the device learns from the data you create.

Heralding the future of mobile computing as phone makers bet on artificial intelligence

While voice assistants like Siri, Google Assistant, Bixby and Amazon's Alexa have been available on phones for a while, the trend of on-device AI on smartphones is gathering momentum with the launch of the Google Pixel 3 and Huawei Mate 20 Pro.

New Delhi: Your smartphone will only become smarter as phone makers pump more artificial intelligence, or AI, into it. While voice assistants like Siri, Google Assistant, Bixby and Amazon’s Alexa have been available on phones for a while, the trend of on-device AI on smartphones is gathering momentum with the launch of the Google Pixel 3 and Huawei Mate 20 Pro. In fact, a June report by Strategy Analytics predicts that 80% of all smartphones will be AI-powered by 2023.

But what exactly is on-device AI? It means that artificial neural networks—deep learning algorithms modelled on the neurons in the human brain—reside in your phone. “In mobile, you’re limited by battery, network connectivity, privacy, etc. So, you can’t keep going back and forth (between the cloud and device),” explains Nanda Ramachandran, global director, Pixel Products at Google.
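To make the idea of neural networks "residing in your phone" concrete, here is a toy sketch. The layer sizes, weights and function names are illustrative inventions, not any vendor's actual model; the point is only that inference runs entirely locally, with no round trip to a cloud server.

```python
def relu(x):
    # Rectified linear activation, applied element-wise
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: y = Wx + b
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def predict_on_device(features):
    """Toy two-layer network evaluated entirely on the device:
    the input features never leave the phone."""
    W1 = [[0.5, -0.2], [0.1, 0.8]]   # made-up weights
    b1 = [0.0, 0.1]
    W2 = [[1.0, -1.0]]
    b2 = [0.0]
    hidden = relu(dense(features, W1, b1))
    return dense(hidden, W2, b2)[0]

score = predict_on_device([1.0, 2.0])
```

In a real phone this forward pass would be offloaded to a dedicated chip (such as Google's Visual Core, discussed below) rather than run in plain Python, but the data flow is the same: input and output both stay on the device.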


With neural networks on your phone, the device learns from the data you create. This also allows for better data privacy, since the data does not have to be shared with a remote cloud server. Computational photography is another area where smartphones use AI and computer vision to enhance images—the iPhone XS and Pixel 3 being cases in point. Unlike traditional photography, computational photography focuses on creating a picture that does not necessarily exist in front of you: the photo may not be the exact scene your eyes see, but it still looks aesthetically pleasing. US-based Rambus Labs says it has a lensless smart sensor, less than 1mm thick, that would fit in thin phones.
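One simple building block behind computational photography is merging a burst of noisy captures into a single cleaner image. The sketch below is a drastic simplification (real pipelines also align frames and weight them), and the pixel values are made up, but it shows the basic idea of constructing a photo no single exposure contains:

```python
def merge_burst(frames):
    """Average a burst of exposures pixel-by-pixel: the simplest
    form of multi-frame merging. `frames` is a list of equal-sized
    2D pixel grids (lists of lists of brightness values)."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Three noisy captures of the same 2x2 scene:
burst = [
    [[100, 200], [50, 80]],
    [[104, 196], [54, 76]],
    [[96, 204], [46, 84]],
]
merged = merge_burst(burst)
```

Averaging cancels out the per-frame sensor noise, which is why burst-based phone cameras can produce usable low-light shots from individually poor exposures.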


Smart battery: One of the most common complaints about smartphones is that their batteries deteriorate over time. However, Google says its net promoter score (NPS) for the Pixel’s battery life is rising. The company attributes this to Adaptive Battery—a machine learning (ML) driven technique that lets Pixel phones prioritise battery power for the apps you use most. Similarly, Google has introduced Smart Compose: when you write an email in Gmail on the Pixel 3, the app automatically starts finishing your sentences. Type ‘my address is’ and the phone taps into Google Maps to find your home address.
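The kind of policy an adaptive battery implies can be sketched in a few lines. Google's actual system is far more sophisticated (it uses a trained model of your usage patterns, not a simple count), and the app names and cutoff below are assumptions for illustration:

```python
def background_allowlist(usage_counts, top_n=2):
    """Toy adaptive-battery policy: keep background power for the
    top_n most-used apps and restrict the rest.
    `usage_counts` maps app name -> launches in the past week."""
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    return set(ranked[:top_n])

usage = {"chat": 42, "maps": 7, "camera": 19, "game": 2}
allowed = background_allowlist(usage)
# Apps outside `allowed` would have background activity restricted.
```

The ML version of this replaces the raw launch count with a prediction of which apps you are likely to open next, so battery is spent where it matters.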

To be sure, these AI features often require new hardware. That is why Google put its own Visual Core chipset inside the Pixel phones. In fact, according to Mistry, the Top Shot feature on the Pixel 3 won’t be available on the Pixel 2, simply because the version of the Visual Core processor in last year’s phone lacks the computational power required for it.


Similarly, Apple recently talked about the new neural engine on the 2018 iPhones, which can perform 5 trillion operations per second. Both Qualcomm and Huawei have put neural processing engines on their newest chipsets to handle ML operations. It’s not that ML algorithms can’t run on existing processors; a dedicated co-processor simply runs them faster and more power-efficiently.
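To put "5 trillion operations per second" in perspective, a back-of-the-envelope calculation helps. The per-image cost below is a made-up illustrative figure, not a measured number for any real model:

```python
# Apple's quoted throughput for the 2018 neural engine:
ENGINE_OPS_PER_SEC = 5_000_000_000_000   # 5 trillion ops/sec

# Hypothetical on-device vision model costing 5 billion ops per image:
MODEL_OPS_PER_IMAGE = 5_000_000_000

images_per_second = ENGINE_OPS_PER_SEC / MODEL_OPS_PER_IMAGE
```

At that rate the engine could, in principle, evaluate such a model a thousand times a second, which is why real-time, on-device features like live scene analysis become feasible.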
