Photo: Reuters

What does Apple’s new Deep Fusion update for the iPhone 11 series mean for you?

  • Deep Fusion is only applicable to the cameras on the iPhone 11, iPhone 11 Pro and 11 Pro Max
  • The update adds new computational photography mechanisms to the iPhone’s camera, improving detail, texture and colour

If you own an iPhone 11, 11 Pro or 11 Pro Max, your phone may have prompted you for the iOS 13.2 update today. Along with support for Apple’s new AirPods Pro earbuds, the update brings the new Deep Fusion camera feature, which uses the Neural Engine inside the A13 Bionic chipset to improve the phone’s imaging prowess.

Deep Fusion is what Apple called “computational photography mad science" at its keynote a few months ago. The feature applies only to the 2x telephoto and wide-angle lenses of the iPhone 11 series, and uses algorithms to enhance the overall image quality you get from the phone.

Unlike Night Mode, this new feature isn’t controlled by the user. According to The Verge, it kicks in when you’re shooting in low light using the wide-angle camera, while the telephoto lens uses a mix of Smart HDR and Deep Fusion, the two computational photography algorithms Apple puts on its iPhones.

With this new algorithm, the camera on the iPhone 11 series takes nine photos in total, without you pressing the shutter button that many times.

This begins with four fast-shutter-speed shots and four standard shots, which the camera captures on its own even before you press the shutter button. This happens in the background, without you ever seeing it. The fast-shutter-speed shots allow the camera to remove motion blur from images, i.e., they freeze subjects in the scene.
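As a rough illustration of how shots can exist “before” you press the button, a camera pipeline can keep a small rolling buffer of recent viewfinder frames. This is not Apple’s actual implementation, and all names below are hypothetical; it is only a minimal sketch of the idea:

```python
from collections import deque

class FrameRingBuffer:
    """Hypothetical rolling buffer: keeps the most recent viewfinder frames
    so the merge step already has them when the shutter is pressed."""

    def __init__(self, capacity=8):
        # Oldest frames drop off automatically once capacity is reached.
        self.frames = deque(maxlen=capacity)

    def on_new_frame(self, frame):
        # Called continuously while the viewfinder is live,
        # before any shutter press.
        self.frames.append(frame)

    def on_shutter_press(self):
        # Hand the already-captured frames (the fast-shutter and standard
        # shots) to the merge step, alongside the long exposure taken
        # at the moment of the press.
        return list(self.frames)
```

The point of buffering like this is simply that the “pre-press" shots described above add no delay when you actually take the picture.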

When the shutter button is pressed, the camera takes a long-exposure photo and combines it with the standard frames it captured earlier to create a “synthetic long". It then merges this frame with the sharpest of the fast-shutter-speed shots and analyses the resulting photo at a pixel level.
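To make that merge step concrete, here is a heavily simplified sketch in Python/NumPy of how such a fusion could work in principle. It is not Apple’s algorithm; the function name, the gradient-based sharpness measure and the blending weights are assumptions made purely for illustration:

```python
import numpy as np

def deep_fusion_sketch(short_frames, standard_frames, long_exposure):
    """Illustrative sketch of a Deep Fusion-style merge (not Apple's code).

    short_frames:    fast-shutter frames captured before the press (freeze motion)
    standard_frames: normal-exposure frames captured before the press
    long_exposure:   single long-exposure frame captured at the press (gathers light)
    All frames are float arrays of shape (H, W, 3), values in [0, 1].
    """
    # 1. Build a "synthetic long" by averaging the long exposure with the
    #    standard frames, reducing noise while keeping brightness.
    synthetic_long = np.mean(np.stack(standard_frames + [long_exposure]), axis=0)

    # 2. Pick the sharpest short-exposure frame, using gradient energy
    #    as a crude sharpness proxy.
    def sharpness(img):
        gray = img.mean(axis=2)
        gy, gx = np.gradient(gray)
        return float((gx ** 2 + gy ** 2).sum())

    sharpest_short = max(short_frames, key=sharpness)

    # 3. Merge per pixel: lean on the sharp frame where it carries strong
    #    detail (high local gradient), and on the cleaner synthetic long
    #    everywhere else.
    gray = sharpest_short.mean(axis=2)
    gy, gx = np.gradient(gray)
    detail_weight = np.clip(np.sqrt(gx ** 2 + gy ** 2) * 8.0, 0.0, 1.0)[..., None]

    return detail_weight * sharpest_short + (1.0 - detail_weight) * synthetic_long
```

In the real feature, the per-pixel analysis is far more elaborate and runs on the Neural Engine, which is why Deep Fusion is limited to the A13-equipped iPhone 11 series.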

The final photo you see is a combination of the nine images the camera took, accounting for motion, detail, light, texture and more. We have only just updated our iPhones to iOS 13.2, but the initial samples we took do suggest a noticeable change. However, we could tell only because we had the opportunity to take the same photos before and after the Deep Fusion update.

It seems Apple’s focus here is on improving texture and detail in images. Theoretically, this results in better edge detection of subjects, more intense colours and, of course, more detail. The different exposures should also help the camera deal with light better, producing a more dramatic (if not always accurate) representation of how light falls across the scene.

Deep Fusion is certainly a step up for the iPhone 11 series’ camera, but like Google’s Pixel cameras, it is also a step towards the future. Computational photography could theoretically ease the space crunch in phones, delivering higher-resolution results from sensors small enough to fit in pocket devices. With Deep Fusion, Apple is giving us a glimpse into how phone cameras will work in the future.
