
Why Google thinks computational photography is the future

The verdict is in on Google’s impressive new Pixel and Pixel XL phones, and one of the bright spots is the camera. In an in-depth review, Dieter Bohn says “if you wanted to agree with Google and call this the best smartphone camera, I wouldn’t argue with you.”

“The results on the Pixel are very, very good,” says Dieter. “I put it in the same ballpark as the iPhone 7 and the Galaxy S7 in most situations, which is not something I expected to say going in.”

Clearly, this is by far the most competitive Google has ever been in mobile photography. But the Pixel phones don’t have cutting-edge hardware on paper, relying on an f/2.0 lens without optical image stabilization. Instead, in typical fashion, Google has turned to complex software smarts to power the Pixel camera. I spoke with Marc Levoy, a renowned computer graphics researcher who now leads a computational photography team at Google Research, about how software helps make the Pixel camera as good as it is.

“I CAN’T THINK OF ANY REASON TO SWITCH HDR+ OFF.”

Levoy’s team has worked on projects as diverse as the 360-degree Jump camera rig for VR and burst mode photography for Google Glass. On the Pixel, the most prominent place you’ll see its work is in the HDR+ mode that has been deployed on Nexus devices over the past few years. Apple popularized mobile HDR, or high dynamic range photography, back in 2010 with the iPhone 4, but Google’s approach differs dramatically in both implementation and technique.

For one thing, you’re supposed to leave it on all the time, and it’s switched on by default. “I never switch it off,” says Levoy. “I can’t think of any reason to switch it off.” You can, of course, and there’s another, slightly higher-quality mode called HDR On that works the way it did on previous Nexus phones, which is to say slowly. But for general photography, Google thinks you should be using HDR+ for every shot.


This no-compromise approach to HDR photography has partly been made possible by new hardware. The Hexagon digital signal processor in Qualcomm’s Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. “The moment you press the shutter it’s not actually taking a shot — it already took the shot,” says Levoy. “It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it’s already captured, and combines them together.”
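That continuous stream is essentially a rolling buffer of frames. As a rough illustration (this is our own Python sketch, not Google’s code, and the class name and buffer sizes are invented), a zero-shutter-lag pipeline keeps the last few dozen frames around and, when the shutter is pressed, serves the burst from frames that were already captured near that moment:

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Frame:
    timestamp: float
    pixels: object  # stand-in for raw sensor data

class ZeroShutterLagBuffer:
    """Toy zero-shutter-lag buffer: frames stream in continuously,
    and a shutter press only selects frames that already exist."""

    def __init__(self, capacity=30):
        # Rolling window; the oldest frames fall off automatically.
        self.frames = deque(maxlen=capacity)

    def on_new_frame(self, pixels):
        # Called for every viewfinder frame while the camera app is open.
        self.frames.append(Frame(time.monotonic(), pixels))

    def on_shutter_press(self, burst_size=9):
        # Mark the press time, then return the already-captured frames
        # closest to that moment; no new exposure is started here.
        press_time = time.monotonic()
        closest = sorted(self.frames, key=lambda f: abs(f.timestamp - press_time))
        return closest[:burst_size]
```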

It’s a major usability improvement on the HDR+ mode in last year’s Nexus 6P and 5X. “What used to happen last year is you’d press the shutter button and you’d get this little circle going around while it captured the images you need for the burst; now it’s already captured those,” says Levoy. “And that’s big, because that means that you can capture the moment you want.”

Though Google has certainly made massive strides in speed, based on our testing of the Pixel we’re not sure we’d agree that we’d never want to turn HDR+ off. It generally produces great results, but we’ve seen the odd image that reminds us a little too much of mid-2000s Flickr shots with overzealous HDR processing — unusual colors in the sky around the edge of a building, for example. These examples are rare, however, and Google’s atypical approach manages to avoid many of the pitfalls of conventional HDR imagery on phones.

The traditional way to produce an HDR image is to bracket: you shoot the same scene several times at different exposures, then merge the shots into a final photograph where nothing is too blown out or too noisy. Google’s method is very different — HDR+ also captures a burst of images, but they’re all underexposed. This preserves highlights, but what about the noise in the shadows? Just leave it to math.
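The details of Google’s merge are its own, but the core idea is simple enough to sketch. Here is a minimal Python illustration, assuming the frames are already perfectly aligned and using a crude gamma curve as a stand-in for real tone mapping: average the identically underexposed frames to knock down the noise, then brighten the shadows afterward.

```python
import numpy as np

def merge_underexposed_burst(frames, gamma=0.5):
    """Toy merge of an underexposed burst.

    frames: list of aligned float arrays in [0, 1], all shot with the
    same short exposure. Averaging reduces noise by roughly sqrt(N);
    the gamma at the end is a crude stand-in for tone mapping that
    lifts the shadows back up.
    """
    stack = np.stack(frames)                   # shape (N, H, W)
    merged = stack.mean(axis=0)                # noise drops by ~sqrt(N)
    return np.clip(merged, 0.0, 1.0) ** gamma  # brighten without clipping highlights
```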


“Mathematically speaking, take a picture of a shadowed area — it’s got the right color, it’s just very noisy because not many photons landed in those pixels,” says Levoy. “But the way the mathematics works, if I take nine shots, the noise will go down by a factor of three — by the square root of the number of shots that I take. And so just taking more shots will make that shot look fine. Maybe it’s still dark, maybe I want to boost it with tone mapping, but it won’t be noisy.”

Why take this approach? It makes it easier to align the shots without leaving artifacts of the merge, according to Levoy. “One of the design principles we wanted to adhere to was no. ghosts. ever.” he says, pausing between each word for emphasis. “Every shot looks the same except for object motion. Nothing is blown out in one shot and not in the other, nothing is noisier in one shot and not in the other. That makes alignment really robust.”
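Levoy’s square-root rule is easy to check with a toy simulation. The noise level below is made up, but the ratio is the point: averaging nine noisy reads of the same dark pixel cuts the noise by a factor of about three.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.1                                   # a dark, shadowed pixel
noise_sigma = 0.05                                 # invented sensor noise level
shots = true_value + rng.normal(0, noise_sigma, size=(9, 100_000))

single_shot_noise = shots[0].std()                 # noise in one frame
averaged_noise = shots.mean(axis=0).std()          # noise after averaging 9 frames

print(round(single_shot_noise / averaged_noise, 2))  # about 3.0, i.e. sqrt(9)
```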
