
Interview: Google sheds light on the “software-defined camera”

The camera plays a critical role in the high-end smartphone sector. Google has made great strides in recent years, delivering outstanding results with the Pixel and Pixel 2. We had the opportunity to talk to Isaac Reynolds, product manager for the camera on the Pixel 2, who shared some insights into the camera’s particular philosophy.

There’s a lot of discussion about smartphone cameras. It isn’t just about the quality of the images, but also about how those images are produced: manual settings or a fast automatic mode? That inevitably raises the question of how a smartphone camera relates to an expensive SLR. So where does Google stand on these questions?

Isaac Reynolds, product manager for the camera in the Pixel 2. / © Ulrich Schaarschmidt/Google LLC

Isaac Reynolds has a clear view of this discussion: the camera is typically seen primarily from a hardware perspective. “But historically, Google is more of a software company,” he told us. With the Pixel, this shows in the camera, among other things. The hardware still matters, of course, but Google relies above all on software. Here Reynolds points in particular to HDR+, a technology Google has now been working on for six years. In that sense, the Pixel camera is a “software-defined camera”: a camera that is essentially defined by its software.

HDR+ makes the difference

HDR+ doesn’t work like a classic HDR mode, which combines several images taken at different exposure settings into one. Instead, HDR+ captures up to ten underexposed frames, all at the same exposure setting, and merges them algorithmically into a single image. This lets the Pixel camera achieve the light-gathering effect of a long exposure without ever taking one, and various alignment techniques ensure the result stays sharp.
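To make the idea concrete, here is a minimal sketch of burst merging in Python. It only averages pre-aligned, identically underexposed frames and applies a simple gain and gamma curve; the real HDR+ pipeline works on raw sensor data with tile-based alignment and robust merging, and all names below are illustrative.

```python
import numpy as np

def hdr_plus_merge(frames):
    """Toy HDR+-style merge: average a burst of identically exposed,
    deliberately underexposed frames to cut noise, then brighten.

    frames: list of HxW float32 arrays in [0, 1], assumed pre-aligned.
    Real HDR+ aligns and merges tiles robustly in the raw domain; this
    sketch just averages and applies a simple gain plus gamma tone map.
    """
    stack = np.stack(frames).astype(np.float32)
    merged = stack.mean(axis=0)                # noise drops ~ sqrt(N)
    gain = 0.5 / max(merged.mean(), 1e-6)      # push mid-tones to mid-gray
    return np.clip(gain * merged, 0.0, 1.0) ** (1 / 2.2)

# Usage: merge a simulated burst of 8 dark, noisy captures.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.02, 0.2, 256, dtype=np.float32), (256, 1))
burst = [np.clip(scene + rng.normal(0, 0.02, scene.shape), 0, 1).astype(np.float32)
         for _ in range(8)]
result = hdr_plus_merge(burst)
```

The point of the averaging step is that noise from N frames cancels by roughly a factor of √N, which is what allows a deliberately underexposed burst to be brightened afterwards without the noise penalty a single short exposure would carry.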

A manual mode is therefore difficult to implement on the Pixel, because it would bypass HDR+: HDR+ processes up to ten frames, which Google could in principle expose as RAW files, but no existing software could merge them back into a finished image. Google has therefore decided against RAW functionality in the Google Camera.

The Pixel camera is known for HDR+ (first generation Pixel pictured here). / © AndroidPIT

Software is also used when zooming to ensure better results. This includes RAISR, a machine-learned upscaling algorithm that helps recover detail in high-contrast areas. It works particularly well with lettering.
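As a rough illustration of the idea, the following sketch upsamples an image and then applies one of four hand-made directional sharpening kernels, chosen per pixel from the local gradient orientation. Real RAISR learns its filter bank from low-resolution/high-resolution image pairs and hashes patches by angle, strength, and coherence; everything here is a simplified stand-in.

```python
import numpy as np

def conv3(img, k):
    """3x3 convolution with edge padding (pure NumPy, same-size output)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def make_filters():
    """Hand-made bank of four direction-dependent sharpening kernels,
    one per gradient-orientation bucket. Real RAISR *learns* this bank."""
    def sharpen(offsets):
        k = np.zeros((3, 3))
        k[1, 1] = 2.0
        for dy, dx in offsets:
            k[1 + dy, 1 + dx] -= 0.5   # kernel sums to 1: DC-preserving
        return k
    return [sharpen([(0, -1), (0, 1)]),    # near-vertical edges
            sharpen([(-1, -1), (1, 1)]),   # one diagonal
            sharpen([(-1, 0), (1, 0)]),    # near-horizontal edges
            sharpen([(-1, 1), (1, -1)])]   # other diagonal

def raisr_like_upscale(img):
    """Toy RAISR-flavored 2x upscale: cheap nearest-neighbor upsample,
    then pick a sharpening filter per pixel from the gradient direction."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    gy, gx = np.gradient(up)
    angle = np.arctan2(gy, gx) % np.pi               # orientation in [0, pi)
    bucket = np.minimum((angle / (np.pi / 4)).astype(int), 3)
    responses = np.stack([conv3(up, k) for k in make_filters()])
    return np.take_along_axis(responses, bucket[None], axis=0)[0]
```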

Google has also placed great emphasis on portrait mode. It simulates the optical bokeh of a real lens, but in Google’s own way: people are always kept in focus, even when they aren’t in the plane of focus. To do this, Google combines two techniques. First, depth is estimated from two images taken from minimally different angles, which the dual-pixel sensor makes possible with its tiny displacement between views. Second, machine learning is used to trace the contours of people more accurately.
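A toy version of the final compositing step might look like the sketch below. The depth map and person mask are simply assumed as inputs here (on the Pixel, depth comes from the dual-pixel sensor’s small stereo baseline and the mask from a learned segmentation model): blur the frame, then keep pixels sharp where they sit near the focus plane or belong to a person.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, depth, person_mask, focus_depth, blur_sigma=6.0):
    """Toy synthetic-bokeh composite.

    image: HxWx3 floats in [0, 1]; depth: HxW map (larger = farther);
    person_mask: HxW soft mask in [0, 1] from a hypothetical segmentation
    model; focus_depth: the depth to keep sharp. All inputs are assumed
    given; this is not the Pixel's actual pipeline.
    """
    # Blur everything, then composite the sharp frame back in wherever a
    # pixel is near the focus plane or is covered by the person mask.
    blurred = np.stack([gaussian_filter(image[..., c], blur_sigma)
                        for c in range(3)], axis=-1)
    in_focus = np.exp(-((depth - focus_depth) ** 2) / (2 * 0.5 ** 2))
    keep = np.maximum(in_focus, person_mask)[..., None]
    return keep * image + (1.0 - keep) * blurred
```

Taking the maximum of the focus weight and the person mask is what encodes the behavior Reynolds describes: a person stays sharp even when the depth map alone would place them outside the plane of focus.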

Google: Point and shoot is the goal

In general, the concept behind the Google Camera is above all to provide a simple photo solution. For now, Reynolds rejects elaborate modes built around object recognition. The team does have modern media formats such as AV1 on its radar for the Google Camera, but ecosystem support still needs to grow. Pixel users don’t have to worry about storage space anyway, because their pictures are synchronized to Google Photos via the cloud. That also means that, for Google, Google Photos and the Google Camera belong together.

Point and shoot: The motto for the Google camera. / © Ulrich Schaarschmidt/Google LLC

Pixel Visual Core: Google doesn’t use the chip

Reynolds also cleared up an issue that Google had previously communicated inconsistently. The Pixel 2 contains the Pixel Visual Core, a processor designed to accelerate machine learning applications. It initially sat unused, and in fact the Google Camera app still doesn’t use it today.

The Visual Core currently serves only third-party software, for which it provides HDR+ functions. Reynolds said there are reasons for this: Google wanted to give app developers easy access to HDR+, but its own camera app needs more complex processing than the Visual Core can currently handle. In the future, however, it is conceivable that Google’s own app will make use of the Visual Core as well.

The duality of the dual camera

Dual cameras are the trend of the hour, and hardly any high-end smartphone goes without one. A second camera does not, however, automatically improve image quality. Reynolds points out that a dual camera adds costs that translate into higher smartphone prices: two image sensors cost more than one, and dual cameras also demand more memory and processing resources. A dual camera therefore requires compromises that Google apparently didn’t want to make with the Pixel 2 XL.


Source: https://www.androidpit.com/google-pixel-the-software-defined-camera