Google’s Pixel 7 camera doesn’t capture reality, it creates a whole new one
Whose reality is it, anyway?
Google knows the truth: pictures aren’t about capturing a moment, they’re about capturing an idea.
Google has never stated that philosophy explicitly, but then again, it hasn’t had to. The proof is in the pudding, and by the pudding, we mean the Pixel 7 and Pixel 7 Pro.
This year — as has been the case in years past — Google unleashed its incomparable knowledge of AI on the Pixel phones’ cameras, giving us a few new jaw-dropping applications of computational photography magic.
That know-how manifested in the Pixel 7 with features like Cinematic Blur, a video mode that softens the background while keeping your subject in focus, and Guided Frame, an accessibility feature that uses audio cues to help blind and low-vision users take selfies.
My personal favorite is Google’s new Photo Unblur, which can actually touch up old photos that aren’t in focus. It’s all the glory of Photoshop, without any of the required expertise.
This AI superpower extends to new photos as well: the Pixel 7 and 7 Pro use the main and ultrawide cameras in concert, taking images from both sensors and meshing them together into a (hopefully superior) photo stew. If all goes as planned, pictures should be sharper and less apt to end up in your memory graveyard. Neat.
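Google hasn’t published the Pixel’s fusion pipeline, but the broad strokes are well-worn computer vision: align one camera’s frame to the other, then blend the overlap. Here’s a deliberately simplified sketch in Python with OpenCV; the `fuse_frames` helper and the feature-matching alignment are illustrative stand-ins, not Google’s actual method.

```python
# Illustrative only: Google has not published the Pixel 7's fusion pipeline.
# This sketches the general idea of multi-camera fusion with OpenCV: align
# the ultrawide frame to the main frame, then blend the overlap.
import cv2
import numpy as np

def fuse_frames(main_bgr: np.ndarray, ultrawide_bgr: np.ndarray) -> np.ndarray:
    """Warp the ultrawide frame onto the main frame and average the overlap."""
    gray_main = cv2.cvtColor(main_bgr, cv2.COLOR_BGR2GRAY)
    gray_uw = cv2.cvtColor(ultrawide_bgr, cv2.COLOR_BGR2GRAY)

    # Match keypoints between the two views to estimate how they line up.
    orb = cv2.ORB_create(2000)
    kp_uw, des_uw = orb.detectAndCompute(gray_uw, None)
    kp_main, des_main = orb.detectAndCompute(gray_main, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_uw, des_main), key=lambda m: m.distance)[:200]

    src = np.float32([kp_uw[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_main[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Re-project the ultrawide frame into the main camera's point of view.
    h, w = main_bgr.shape[:2]
    warped = cv2.warpPerspective(ultrawide_bgr, H, (w, h))

    # Naive blend: average the two frames wherever the warp produced pixels.
    overlap = (warped.sum(axis=2) > 0)[..., None]
    fused = np.where(overlap, (main_bgr.astype(np.uint16) + warped) // 2, main_bgr)
    return fused.astype(np.uint8)
```

Real pipelines are far more careful about noise, motion, and per-region weighting; the point here is just the align-then-merge skeleton.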
While computational photography isn’t new — it’s why Apple got away with keeping the same 12-megapixel camera in its iPhones for years — Google has been far and away the most enthusiastic of all smartphone companies when it comes to pushing the boundaries. And if this year’s Pixel generation is any indication, Google has no intention of slowing that roll.
The beyond
The quality of smartphone cameras has exploded over the last decade, and as pressure to keep that train moving in the right direction mounts (new phones need better cameras to justify yet another generation of hardware), tech giants like Apple and Google have turned inward to develop their own vertically integrated chips: the Bionic and Tensor, respectively.
Both are specifically designed to advance mobile machine learning, but differ in their implementation. While Apple focuses its Bionic chip more on background features that brighten, colorize, and stabilize photos or video in key moments, Google sometimes brings computational photography to the forefront.
Features like Magic Eraser, for example, imbue Photoshop-level editing into smartphone photography, allowing users to scrub undesired subjects from their pictures. That level of reality retcon may not be novel in the world of software, but for a smartphone, it’s a significant step.
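Google’s version almost certainly leans on a learned inpainting model, but the core move can be demonstrated with classical tools: mask the thing you want gone, then fill the hole from the surrounding pixels. A minimal sketch using OpenCV’s built-in Telea inpainting (the file name and mask coordinates are made up for illustration):

```python
# Not Magic Eraser itself: Google relies on a learned inpainting model, while
# this uses OpenCV's classical Telea algorithm. The idea is the same, though:
# mask the unwanted subject, then fill the hole from the surrounding pixels.
import cv2
import numpy as np

photo = cv2.imread("beach.jpg")                  # hypothetical input photo
mask = np.zeros(photo.shape[:2], dtype=np.uint8)
mask[120:340, 400:520] = 255                     # made-up region covering a stranger

# Propagate nearby texture inward to fill the masked region.
scrubbed = cv2.inpaint(photo, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("beach_scrubbed.jpg", scrubbed)
```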
Similarly, features like Action Pan are designed to convey motion, giving photos snapped of moving subjects a blurred background, while retaining the focus on a chosen protagonist. The results are fun to witness; was I going 5 mph or 50 in this Action Pan picture below? I’ll never tell.
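Google hasn’t detailed Action Pan’s internals, but the visible effect amounts to a masked motion blur: streak the whole frame, then composite the sharp subject back on top. A rough sketch under that assumption, with hypothetical file names and a hand-supplied subject mask (the Pixel segments the subject automatically):

```python
# A rough approximation of Action Pan's look, not Google's method: motion-blur
# the whole frame, then composite the sharp subject back in with a mask.
import cv2
import numpy as np

frame = cv2.imread("cyclist.jpg")                              # hypothetical input
subject_mask = cv2.imread("cyclist_mask.png", cv2.IMREAD_GRAYSCALE)

# Horizontal motion-blur kernel: averages each pixel with its neighbors
# along one row, mimicking a camera panning sideways.
k = 31
kernel = np.zeros((k, k), dtype=np.float32)
kernel[k // 2, :] = 1.0 / k
blurred = cv2.filter2D(frame, -1, kernel)

# Keep the subject sharp, let everything else streak.
alpha = (subject_mask.astype(np.float32) / 255.0)[..., None]
panned = (alpha * frame + (1 - alpha) * blurred).astype(np.uint8)
cv2.imwrite("cyclist_actionpan.jpg", panned)
```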
Some of the features are fun, some are practical, but all of them move in the same direction: one where AI, not the photographer, is at the steering wheel. As chips and capabilities continue to improve, so too will their breadth and influence. What will the next generation bring? It’s hard to say, but my guess is more.
Twisting the dial
Not everything is about AI fabrication. Google has also trained its AI sights on making photos more “realistic,” as opposed to entirely new versions of reality. Real Tone is a Pixel photo feature meant to more accurately render darker skin tones, especially in low-light conditions (whether it achieves that goal is an entirely different conversation).
And while the goal is noble (people of color are often on the wrong side of AI improvements), it’s the invisible applications that could be the most insidious. When Photo Unblur intervenes to sharpen a picture of your significant other, or your kid’s birthday party, AI operates entirely behind the scenes; we never willingly crack open the edit toolbox and choose to apply a creative vision.
And if the result is sharper pictures better suited for immediate gratification on social media, it’s hard to argue with that. But what happens when the goal is no longer sharpening a picture? What if the faces of the loved ones in your pictures are more recreation than reality?
In a world where marvels like DALL-E exist, the role of AI in creating our reality is more in question than ever, and if we’re inviting the ability to fabricate photos into a realm as personal as our smartphones, the two are bound to butt heads.
If it sounds like I’m trying to make computational photography problematic, let me be clear: There is no scandal here. People have been editing digital photos for a long time, and Google’s AI won’t change that. But as the dial twists, it’s worth taking a step back to ask whose reality you’re capturing. Yours? Or the one that happens inside your phone’s black box?