AI-powered smartphone cameras with multiple lenses make images sparkle
Phone makers are turning to artificial intelligence (AI) and multiple lenses to deliver a DSLR (digital single-lens reflex)-like experience on smartphones
Cameras have become the most talked-about feature in smartphones, and phone makers are turning to artificial intelligence (AI) and multiple lenses to deliver a DSLR (digital single-lens reflex)-like experience.
Multiple cameras, dedicated camera chip
Dual cameras have replaced single ones, offering the option to enhance depth of field, add more optical zoom, capture wide-angle shots and true monochrome photos. Huawei’s P20 Pro goes a step further, offering three rear cameras comprising a 40-megapixel camera with RGB (red, green and blue) sensor, a 20-megapixel camera with monochrome sensor, and an 8-megapixel camera with 3x optical zoom. It also offers a composite 5x hybrid zoom which combines shots from all three cameras. Camera company Light is building a smartphone with nine cameras in a bid to capture multiple pictures at different focal lengths and depths, and then use an algorithm to stitch them together.
Integrating a separate camera chip and RAM is another trend that is expected to grow in the near future. Google’s Pixel 2 and Pixel 2 XL use a separate camera processor called Pixel Visual Core, which can process HDR+ images five times faster than phones that rely only on the central processing unit. Sony has developed a three-layer CMOS (complementary metal-oxide-semiconductor) sensor with a dedicated DRAM (dynamic random access memory) to speed up the camera sensors, resulting in faster processing of high-resolution photos.
In 2019, users can expect smartphones with more powerful camera sensors such as Sony’s new IMX586 CMOS sensor, which can shoot 48-megapixel photos, capturing a lot more detail. The recently launched Sony Xperia XZ2 can capture 4K HDR video for richer colours and contrast.
Better algorithms and AI magic
Google uses an algorithm that captures multiple shots of the same object or scene and filters out the blurry ones. Then it combines the good ones to create a properly lit image. The algorithm has been trained on a data set comprising millions of photos. The camera also uses machine learning (a subset of AI) to predict which areas should stay sharp in the photo and which need to be blurred. To make the blur look realistic, it needs to know how far each object is from the camera.
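The multi-frame approach described above can be sketched in a few lines: score each burst frame for sharpness, discard the blurriest, and average the rest. This is a simplified illustration, not Google’s actual HDR+ pipeline; the gradient-energy sharpness measure and the frame representation (a grayscale image as a list of pixel rows) are assumptions made for the example.

```python
def sharpness(frame):
    """Score a grayscale frame (list of rows) by gradient energy:
    blurry frames have weaker pixel-to-pixel differences."""
    score = 0
    for y in range(len(frame)):
        for x in range(len(frame[0]) - 1):
            score += (frame[y][x + 1] - frame[y][x]) ** 2
    for y in range(len(frame) - 1):
        for x in range(len(frame[0])):
            score += (frame[y + 1][x] - frame[y][x]) ** 2
    return score

def merge_sharpest(frames, keep):
    """Filter the burst down to the `keep` sharpest frames,
    then average them pixel-wise into one well-exposed image."""
    best = sorted(frames, key=sharpness, reverse=True)[:keep]
    h, w = len(best[0]), len(best[0][0])
    return [[sum(f[y][x] for f in best) / keep for x in range(w)]
            for y in range(h)]
```

A high-contrast frame scores higher than a flat, defocused one, so the blurry frames are the first to be dropped before merging.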
Most phones use dual cameras to accomplish this, but Google Pixel 2 and Pixel 2 XL perform the task with a single camera using a feature called dense dual-pixel autofocus. It allows the camera to take two pictures simultaneously—one from the left side of the lens and the other from the right—giving the camera depth perception just like your own two eyes, and generating a depth map of objects in the image from that, according to Isaac Reynolds, Pixel Camera’s product manager. LG’s G7 ThinQ also uses an image recognition engine powered by an artificial neural network (which loosely imitates the neurons in the brain) trained on 100 million images.
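The left- and right-of-lens views behave like a tiny stereo pair, and depth falls out of the horizontal shift (disparity) between them: the larger the shift, the closer the object. A minimal block-matching sketch of that idea, with the window size and search range chosen arbitrarily for illustration (this is not Google’s actual algorithm):

```python
def disparity_row(left, right, window=1, max_d=3):
    """For each pixel in `left` (a row of brightness values), find the
    horizontal shift that best aligns it with `right`, by minimising the
    sum of absolute differences over a small window around the pixel."""
    n = len(left)
    clamp = lambda i: min(max(i, 0), n - 1)  # stay inside the row
    disp = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(max_d + 1):
            cost = sum(abs(left[clamp(x + w)] - right[clamp(x + w - d)])
                       for w in range(-window, window + 1))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp
```

A bright feature that appears at position 2 in the left view and position 1 in the right view yields a disparity of 1 at that pixel; a full depth map repeats this per pixel across every row.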
Phone makers are also collaborating with camera companies to improve the camera software. Chinese phone maker Huawei recently roped in German camera maker Leica to optimize the camera performance in its flagship P20 Pro smartphone. Asus ZenFone 5Z uses an AI-powered scene mode that can automatically select a scene, based on user preferences, and then adjust the settings accordingly for optimized results, so users won’t have to tinker with the camera settings for every new scenario. Samsung Galaxy S9 uses a multi-frame noise-reduction system that captures 12 photographs in quick succession and combines them for better low-light shots.
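Multi-frame noise reduction works because averaging N independently noisy exposures shrinks random sensor noise by roughly the square root of N. A toy sketch of the principle (the 12-shot burst count comes from the Galaxy S9 example above; the Gaussian noise model and pixel values are assumptions for illustration):

```python
import random

def average_frames(frames):
    """Pixel-wise mean of a burst of frames (each a flat list of pixels)."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)  # make the demo deterministic
true_pixel = 100.0
# Simulate a 12-shot low-light burst of a single pixel,
# each exposure corrupted by Gaussian sensor noise.
burst = [[true_pixel + random.gauss(0, 8)] for _ in range(12)]
denoised = average_frames(burst)[0]
```

The averaged pixel lands closer to the true value than the noisiest single exposure, which is why burst averaging brightens low-light shots without amplifying grain.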