New Delhi: Smartphone makers are in a perpetual race to attract photography enthusiasts by adding more cameras to their phones. It was HTC that started the trend way back in 2011 when it introduced the first dual-camera smartphone. Seven years later, Huawei launched its P20 Pro with three cameras. Now, Samsung has gone a step further with the new Galaxy A9, which comes with four cameras.

To be sure, more cameras do offer some tangible benefits and additional features, though they come at a price. Huawei’s P20 Pro, for instance, has a 40-megapixel (MP) camera with an RGB (red, green and blue) sensor, a 20MP camera with a monochrome (black and white) sensor, and an 8MP camera with a 3x optical zoom lens. The zoom lens enables true optical zoom, which is noticeably better than regular digital zoom. The monochrome sensor, meanwhile, has no colour filter and captures only brightness, so it records finer detail; when its output is blended with the colour photos from the primary 40MP sensor, the resulting images carry greater detail.
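In principle, that blending step can be pictured as follows: split the colour frame into luminance (brightness) and chrominance (colour), swap in the finer luminance detail from the monochrome capture, then reattach the colour. The sketch below is a minimal illustration assuming both frames are already aligned and the same size, with an arbitrary blend weight; it is not Huawei’s actual processing pipeline.

```python
import numpy as np

def fuse_mono_and_rgb(rgb, mono, detail_weight=0.6):
    """Blend a detailed monochrome capture into a colour frame.

    rgb  : float array (H, W, 3), values in [0, 1], from the RGB sensor
    mono : float array (H, W), values in [0, 1], from the monochrome sensor
    Assumes both frames are already aligned and equally sized.
    """
    # Split the colour image into luminance and chrominance.
    # ITU-R BT.601 luma weights; chrominance is the residual colour signal.
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    chroma = rgb - luma[..., None]

    # The monochrome sensor has no colour filter, so it records finer
    # luminance detail. Blend it into the colour image's luma channel.
    fused_luma = (1 - detail_weight) * luma + detail_weight * mono

    # Re-attach the original colour information to the sharper luminance.
    return np.clip(fused_luma[..., None] + chroma, 0.0, 1.0)
```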

Samsung’s Galaxy A9 sports a 24MP camera with an RGB sensor, a 10MP camera with a 2x optical zoom lens, an 8MP wide-angle camera with a 120-degree field of view, and a 5MP camera that captures depth information. Phone makers pair such hardware with machine learning algorithms—often promoted as Artificial Intelligence-powered cameras—to produce better images.

Yet, while smartphone cameras have become better, and multiple sensors and lenses allow manufacturers to add features, do multiple cameras necessarily improve image quality over what dual cameras offered?

According to Wally Yang, senior marketing director (consumer business group) at Huawei, the addition of multiple cameras lets the phone process images differently. The Mate 20 Pro, he explains, can process subjects in real time and apply black and white, and colour effects. Further, since the smartphone camera is about more than just taking pictures, adding more sensors and lenses makes phones future-ready. By using its multiple sensors, the Mate 20 Pro can create 3D models of objects around you, says Yang.

Smartphones have undoubtedly displaced point-and-shoot cameras. But they have some distance to cover before they dislodge digital single-lens reflex (DSLR) cameras used by professional photographers.

Delhi-based wedding, documentary and travel photographer Vijay Tonk says multiple cameras will expand the options available to users who do not know how to use DSLRs and rely primarily on their phones for all their photography. He adds that multiple sensors and lenses let regular users access features that would otherwise be limited to the DSLR segment, though without the same level of detail.

Moreover, while smartphones may satisfy an average user’s needs these days, professional photography still runs into physical barriers that cannot simply be bypassed, and most likely will for a long time, according to Marius Eschweiler, global director of business development at Leica Camera AG. “It all comes down to light and how efficiently a device can capture it. Like their small-sensor compact camera counterparts—which they’ve almost completely replaced—smartphones hit these barriers, too," he adds.

Meanwhile, technological developments are promising to help phone makers bundle additional cameras in smartphones, or even multiple lenses in cameras. Computational photography, for instance, allows smartphones to compensate in software for what they lack in space and DSLR-class sensors. Researchers at Columbia University, New York, have experimented with high-resolution sensors on smartphones, and surmised that computational photography could, theoretically, overcome the space constraints in smartphones (nyti.ms/2DR2f4f).

A case in point is Rambus Labs, which recently developed a “lensless" sensor. Google, for its part, is leveraging software to offer many of the capabilities of multiple-camera phones using just a single camera. Its Super Res Zoom feature, available on the Pixel 3 and Pixel 3 XL, builds on HDR+ to capture a burst of frames of the same scene (beginning before the user presses the shutter) from slightly different positions, then uses its algorithms to break them into smaller pieces, filter out the blurry ones from each frame and merge the rest into a higher-resolution image.
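Stripped of the raw-processing details, the burst-merging idea can be sketched roughly as below: score each frame’s sharpness, discard the blurriest, and average the upscaled survivors so that noise cancels out and fine detail survives enlargement. This is a toy sketch under strong simplifying assumptions (greyscale, already-aligned frames, nearest-neighbour upscaling); Google’s actual pipeline merges raw bursts with sub-pixel alignment.

```python
import numpy as np

def sharpness(frame):
    """Variance of a simple Laplacian response: higher means sharper."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def merge_burst(frames, keep_ratio=0.75, scale=2):
    """Toy burst merge: drop the blurriest frames, upscale and average the rest.

    frames : list of float arrays (H, W), greyscale, assumed already aligned
    """
    # Rank frames by sharpness and keep only the sharpest ones.
    scores = [sharpness(f) for f in frames]
    keep = max(1, int(len(frames) * keep_ratio))
    best = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:keep]

    # Naively upscale each kept frame (nearest neighbour) and average them.
    # Averaging several slightly different frames suppresses noise, which is
    # what lets the enlarged result hold more usable detail.
    upscaled = [np.kron(frames[i], np.ones((scale, scale))) for i in best]
    return np.mean(upscaled, axis=0)
```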

Google claims that photos taken with Super Res Zoom are equivalent to the output of smartphone cameras with 2x optical zoom lenses. Similarly, to generate portrait shots with a blurred background, it uses the sensor’s dual-pixel autofocus feature, which captures two slightly offset views from the left and right and compares them to create a perception of depth. Full-fledged computational photography is still a long way off, but companies are already taking the initial steps.
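A crude way to picture that depth step is classic block matching between the two slightly offset views: for each pixel, find the horizontal shift that best aligns the two images; nearer subjects shift more. The sketch below is illustrative only, using a simple sum-of-absolute-differences search, and is far simpler than Google’s learned approach.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, patch=5):
    """Crude block-matching depth cue from two horizontally offset views.

    left, right : float arrays (H, W), the two dual-pixel sub-images
    Returns a per-pixel disparity estimate; larger disparity ~ closer subject.
    """
    h, w = left.shape
    best_cost = np.full((h, w), np.inf)
    best_disp = np.zeros((h, w), dtype=np.int32)
    kernel = np.ones(patch)

    for d in range(max_disp + 1):
        # Shift the right view by d pixels and compare against the left view
        # (wrap-around at the image edges is ignored in this toy version).
        diff = np.abs(left - np.roll(right, d, axis=1))

        # Sum absolute differences over a small patch around each pixel.
        cost = diff
        for axis in (0, 1):
            cost = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, cost)

        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d

    return best_disp  # a depth cue that can drive background blur
```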
