In recent years Google has paid particular attention to photography, and the Portrait Mode of the Google Pixel 2 has been widely appreciated by users for the quality it delivers.
Building on these results, the Mountain View giant has continued to refine the feature for the Pixel 3, and in the past few hours it has unveiled some details about the work that went into reaching the current level of quality.
To achieve this, Google used a neural network together with an unusual accessory: a custom case designed to hold five smartphones at the same time.
Thanks to this rig, the developers were able to refine both the focusing system and depth recognition by capturing photos from five viewpoints simultaneously, overcoming the limits that come from using a single smartphone at a time.
Using the photos and data captured with this special case, Google was able to train the neural network to process the information correctly and generate even more accurate depth maps in Portrait Mode.
In version 6.1 and later of the Google Camera app, the depth maps are embedded in Portrait Mode images. This means you can use the Google Photos depth editor to change the amount of blur and the focus point after capture.
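To get a feel for what "embedded" means here: Pixel Portrait Mode photos store auxiliary images (such as the depth map) as additional JPEG streams appended inside the same file, per Google's Dynamic Depth format. The sketch below is a naive, illustrative scanner, assuming this concatenated-JPEG layout; `find_embedded_jpegs` is a hypothetical helper, not part of any Google API.

```python
# Hypothetical sketch: scan a Portrait Mode file's bytes for every JPEG
# start-of-image (SOI) marker. The first hit is the primary photo; later
# hits are candidate embedded images such as the depth map (assuming the
# concatenated-JPEG layout used by Google's Dynamic Depth format).

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker bytes

def find_embedded_jpegs(data: bytes) -> list[int]:
    """Return the byte offset of every JPEG SOI marker in the container."""
    offsets = []
    pos = data.find(SOI)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(SOI, pos + 1)
    return offsets

# Synthetic bytes standing in for a real Portrait photo:
container = SOI + b"primary image payload" + SOI + b"depth map payload"
print(find_embedded_jpegs(container))  # → [0, 24]
```

In a real file you would read the bytes with `open(path, "rb").read()` and then parse the stream starting at each offset; the Google Photos depth editor relies on exactly this kind of embedded data to re-render the blur after capture.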