DO MY PIXELS DECEIVE ME?


What happens when a portable image-making machine meets advanced artificial intelligence and computation? An ethical mess.

Just a few weeks back, Google debuted their new flagship phone – the Google Pixel 3. On top of being a snappy, pure Android experience, one of the big reasons the Pixel line has stood out is its stunning camera system. Regularly touted as the best smartphone camera around, the Pixel has completely blown me away with the kinds of images such a simple, proficient imaging system can produce. Here are just a few shots from my Pixel 2 XL that I’ve captured over the last few months (since my Galaxy S6 went for a swim; RIP).

Part of what makes the Pixel’s camera so advanced is an AI back-end.

No. Not that kind of robotic back-end.

On top of producing a “portrait” blurring effect with a single lens (front- and rear-facing), the Pixel camera is constantly using machine learning and computational tech to improve the quality of images over time.

Way cool, right? Well… introducing Artificial Intelligence into photography starts to pose some ethical questions.

Cut to a few weeks ago and the Pixel 3: Google touts a new camera feature (on that phone and older Pixels) – Night Sight.

Put simply on Google’s site: “Take Better Night Pics: Night Sight brings out the best details and colors that get lost in the dark.” I suspect the Spidey-Sense of a lot of photographers out there is tingling reading that sentence, especially given that the lens and sensor system on the Pixel(s) didn’t get any substantial upgrade. Like most of us learned in Photo 101, you can only truly control three factors to make an exposure – Shutter Speed (how long the sensor or film “sees” the scene), Aperture (how much light is transmitted through the lens to the sensor/film), and ISO (digital or analog sensitivity to that transmitted light). Even knowing that triangle of constraints... it’s undeniable: Night Sight on the Pixel is impressive.
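If you want to put rough numbers on that constraint, the standard exposure-value formula does the trick. Here’s a quick Python sketch – the f-numbers, shutter speeds, and ISO values below are purely illustrative assumptions, not actual Pixel settings:

```python
import math

def exposure_value(aperture_f, shutter_s, iso):
    """Exposure value relative to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(aperture_f ** 2 / shutter_s) - math.log2(iso / 100)

# A bright daytime exposure vs. a handheld night exposure (illustrative numbers only).
print(round(exposure_value(aperture_f=1.8, shutter_s=1/1000, iso=100), 1))  # ~11.7 EV
print(round(exposure_value(aperture_f=1.8, shutter_s=1/15, iso=800), 1))    # ~2.6  EV
```

That’s roughly nine stops less light in the night scene, and the only dials left to turn are a longer shutter (blur) or a higher ISO (noise) – which is exactly the gap Night Sight tries to close in software.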

But how the hell does Google do it? Well, that’s where the Robots come in. [Note: There’s a lot of technology at play here, but I’ll break down some of the key parts and drop a quick video here that explains in-depth what’s going on]

Essentially, Night Sight combines image-stacking of varying exposure lengths, photo-merging to reduce sensor noise, AI to “decide” what’s blurry (and remove it from the final image), and some clever color temperature and tone mapping adjustments to “match” closer to what our human eyes see.
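Google hasn’t published the full pipeline beyond that blog post, but the core “merge a burst to beat down noise” idea is simple enough to sketch. Here’s a deliberately naive Python/NumPy toy – my own simplification, not Google’s algorithm – that scores frames for blur with a variance-of-Laplacian measure, drops the worst ones, and averages the rest (real Night Sight also aligns frames to compensate for hand shake, which I’m skipping entirely):

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Variance of the Laplacian: a common (rough) proxy for how in-focus a frame is."""
    return laplace(frame.astype(np.float64)).var()

def naive_night_stack(frames, keep_ratio=0.8):
    """Toy 'night sight': drop the blurriest frames in a burst, then average
    the rest to reduce random sensor noise."""
    scores = [sharpness(f) for f in frames]
    keep_n = max(1, int(len(frames) * keep_ratio))
    keep_idx = np.argsort(scores)[-keep_n:]          # indices of the sharpest frames
    stack = np.stack([frames[i].astype(np.float64) for i in keep_idx])
    return stack.mean(axis=0)                        # merged, lower-noise frame

# Fake burst: 15 noisy copies of the same scene (stand-ins for real raw frames).
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640))
burst = [scene + rng.normal(0, 25, scene.shape) for _ in range(15)]
merged = naive_night_stack(burst)
print(f"single-frame noise: {np.std(burst[0] - scene):.1f}, "
      f"merged noise: {np.std(merged - scene):.1f}")
```

Averaging a dozen frames cuts the random noise by roughly the square root of the frame count; the AI-flavored parts (deciding what’s blurred, learning white balance) are where the “is it still a photograph?” question really kicks in.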

But...is it photography? That’s a damn good question. Ever since the early days of compositing, panoramas, and then High-Dynamic Range imaging, the collective “we” in the photography world have asked ourselves whether a number of photos – stacked together in one fashion or another – still captures the essence of “writing with light” that is photo-graphy.

I tend to fall somewhere on the “purist”, photojournalistic side – sure, images can be manipulated for particular purposes, but the second you add, remove, clone, or alter elements of the scene, it is no longer a true photo, but an illustration or piece of digital art. That does not diminish the value of the work, but it does change its authenticity and objective ‘truth’ – which I place a damn high value on in today’s times.

So here comes Pixel’s Night Sight, bringing Computational Photography to the masses (without a hint of transparency aside from a semi-obscure Google AI Blog post). We need to reckon with the fact that these images, as stunning and mind-bending as they are, are not pure photography: the algorithm is literally stitching the elements of up to fifteen photos together based on what it “sees” as blurred, noisy, black, white, foreground or background.

At the end of the day, I for one welcome our robot overlords – BUT – Let’s call them out for what they are: Cleverly-written Algorithms, not Creative Visionaries.

Keep seeing, keep shooting, and keep putting your best Photo Forward.

Cheers,
-Ben