I want to take photos the way I want, but my phone's camera is so smart that it won't let me


My phone's camera is too smart. When I take a photo, the result is not exactly what I was looking for. It's better. Or maybe not.

Our smartphones have advanced on many fronts, and it is clear that sensors and, above all, computational photography have achieved exceptional results. Those results, however, can pose a problem: they do not reflect reality the way we wanted it to be reflected.

For our phones, "every photo is a problem to solve"

Art photographer Gregory Gentert uses his state-of-the-art iPhone to take all sorts of photos, but he has found that the results are starting not to be what he expected. He and other advanced photographers shared their experience with iPhone cameras in The New Yorker, and all agreed that something strange was going on.

“I’ve tried taking photos with the iPhone when the light gets bluer at the end of the day, but the iPhone tries to correct that kind of thing,” he explains. Certain tones that exist in the sky are edited, processed and ultimately removed because the hue, for whatever reason, is not acceptable to Apple’s computational photography algorithms. As Gentert put it:

“The iPhone sees the things I’m trying to photograph as a problem to be solved.”

It is a good way to understand what the cameras of iPhones, and of any current smartphone, are like. In their obsession with giving the user the best possible experience, these cameras end up performing a series of post-processing tasks all aimed at one thing: making sure you don't have to worry about anything other than pressing the shutter.

It is the latest expression of those compact "point & shoot" cameras that were so successful before the arrival of smartphones precisely because they offered a photographic experience perfect for those who did not want to complicate their lives.

That philosophy has been taken to extremes in today's smartphones. Manufacturers often integrate sensors with mammoth resolutions that then produce photos at a lower resolution thanks to pixel binning.
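
To make the idea concrete, here is a minimal Python sketch of 2x2 pixel binning. The dimensions and the digital averaging are illustrative assumptions: real binning happens on the sensor or in the image signal processor, and may sum charge rather than average digital values.

```python
# Minimal sketch of 2x2 pixel binning: every 2x2 block of photosites becomes
# one output pixel. Sizes and the averaging are illustrative; real binning is
# done on the sensor/ISP and may sum charge instead of averaging values.
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a single-channel readout."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 48 MP-style readout (8000 x 6000) becomes a 12 MP image (4000 x 3000).
raw = np.random.randint(0, 1024, size=(6000, 8000)).astype(np.float32)
print(bin_2x2(raw).shape)  # (3000, 4000)
```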

On both iPhones and Android phones, when you choose the 16:9 aspect ratio for photos, what the phone actually does is take the photo in 4:3 as usual and then crop it, without telling you anything. And if you want it in 1:1, the same thing happens: the photo is taken in 4:3 and then cropped to the desired shape.
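
As an illustration, this is roughly what that silent crop amounts to: a small Python sketch using Pillow with made-up dimensions, not the actual code of any camera app.

```python
# Rough illustration of the silent crop: shoot the sensor's native 4:3 frame,
# then trim it to the aspect ratio the user asked for. Dimensions are made up.
from PIL import Image

def crop_to_ratio(img: Image.Image, ratio_w: int, ratio_h: int) -> Image.Image:
    w, h = img.size
    target = ratio_w / ratio_h
    if w / h > target:                      # frame too wide: trim the sides
        new_w = int(h * target)
        left = (w - new_w) // 2
        return img.crop((left, 0, left + new_w, h))
    new_h = int(w / target)                 # frame too tall: trim top and bottom
    top = (h - new_h) // 2
    return img.crop((0, top, w, top + new_h))

native_4_3 = Image.new("RGB", (4000, 3000))        # the 4:3 capture
print(crop_to_ratio(native_4_3, 16, 9).size)       # (4000, 2250)
print(crop_to_ratio(native_4_3, 1, 1).size)        # (3000, 3000)
```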

Not to mention the magic of the HDR modes, which make it possible to capture the light and dark areas of the same photo with spectacular precision, or the methods that, for example, Google uses for zooming, combining multiple RAW frames to deliver the end result.
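
Conceptually, an HDR merge of bracketed exposures can be as simple as weighting each frame, pixel by pixel, by how well exposed it is. The toy Python example below assumes already-aligned frames and is only a simplification of exposure fusion, not Apple's or Google's actual pipeline.

```python
# Toy exposure fusion: weight each bracketed frame, pixel by pixel, by how
# close it is to mid-grey ("well exposed"), then blend. A simplification of
# Mertens-style fusion, not any vendor's real HDR pipeline.
import numpy as np

def fuse_exposures(frames: list[np.ndarray], sigma: float = 0.2) -> np.ndarray:
    """frames: aligned float images in [0, 1] with identical shapes."""
    stack = np.stack(frames)                                   # (n, h, w, 3)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8                      # normalise per pixel
    return (weights * stack).sum(axis=0)

# Simulated bracket: underexposed, normal and overexposed versions of a scene.
scene = np.random.rand(480, 640, 3).astype(np.float32)
bracket = [np.clip(scene * gain, 0, 1) for gain in (0.4, 1.0, 2.5)]
print(fuse_exposures(bracket).shape)  # (480, 640, 3)
```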

Google's Pixels were pioneers here: with the Pixel 4 and 4 XL we saw an important advance in computational photography that, for example, gave us access to Live HDR+ and the striking astrophotography modes. The trend has continued with the Pixel 6, its Magic Eraser and its Motion mode, and that use of portrait modes and artificial bokeh that we like so much has conquered not only regular photos but even video, with the "cinematic mode" already offered by the new iPhone and other phones on the market.


Things are reaching limits that border on the absurd, because manufacturers are deciding that the performance improvements in mobile chips should focus on the artificial intelligence and deep learning algorithms behind that "magic" of our phone cameras. Google's Tensor and OPPO's MariSilicon X are designed precisely for this.

Apple is also clear about this: a couple of years ago it bought the company Spectral Edge precisely to strengthen its computational photography. For years, the company has been applying machine learning to handle in real time not only white balance, colorimetry and exposure adjustments, but also to refine lighting and recognize the faces of the people we photograph.

Computational photography

When we take a photo on our phones, we are not taking one photo but several, which are combined to achieve the end result. Apple has its Deep Fusion technology, and Google, like other Android manufacturers, has been doing the same for a long time. The technique works, of course, and lets us capture details that cameras of the past (mobile or not) were unable to capture.
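
The core intuition is easy to demonstrate: averaging several noisy frames of the same scene makes random noise shrink. The sketch below uses synthetic data and assumes perfectly aligned frames; real pipelines such as Deep Fusion or HDR+ add alignment, ghost rejection and per-tile weighting on top of this.

```python
# The core idea behind multi-frame capture: average several aligned noisy
# frames of the same scene and the random noise shrinks. Real pipelines add
# alignment, ghost rejection and per-tile weights; none of that is modelled.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((480, 640)).astype(np.float32)      # synthetic "true" scene
burst = [np.clip(scene + rng.normal(0.0, 0.05, scene.shape), 0, 1)
         for _ in range(8)]                            # 8 noisy exposures

merged = np.mean(np.stack(burst), axis=0)

print("single frame error:", float(np.abs(burst[0] - scene).mean()))
print("merged frame error:", float(np.abs(merged - scene).mean()))  # ~1/sqrt(8) smaller
```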

That algorithm is stealing my photo

The problem is that in this processing we can lose things we did want to capture, things the algorithms "steal from us": certain shades of the sky or certain details of an object, for example. Not to mention when grotesque things happen, like the person whose face ended up being treated as a leaf by the camera (or rather, the algorithms) of the iPhone.

The team at Lux, developers of the Halide photography app for iPhone, complained precisely about how the iPhone 13 cameras have lately become too smart.

In a fantastic, in-depth analysis they found that often the option users have to get less "hyper-realistic" photos is to switch to RAW shooting mode and bypass the iOS algorithms.


iPhone telephoto shots struggle in low light: noise is everywhere in this RAW shot at sunset, while on the right the magic happens. At least in some places, because in others (such as the outline and detail of the palm tree) detail is lost at the cost of "softening" the image and making the noise disappear. Source: Halide.

It seems like a good option for those looking for the "unadulterated" version of the photos our phones take. Android manufacturers also allow it on some phones, and it is, of course, a way to avoid post-processing.

Sometimes, though, even those RAW shots are doctored: a common complaint among those who use Apple's ProRAW format is that it "washes out" the image to remove noise. So even the manufacturers' native RAW formats may not be enough, and fortunately there are third-party applications for obtaining pure RAW files.
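
For those who want to develop a RAW file on a computer with as little automatic correction as possible, something like the following can work. It assumes the third-party rawpy package (a Python wrapper for LibRaw) and a hypothetical DNG file name, and it has nothing to do with Apple's own ProRAW processing.

```python
# Developing a RAW/DNG with minimal automatic correction, assuming the
# third-party rawpy package (a LibRaw wrapper) and a hypothetical file name.
import rawpy
import imageio.v3 as iio

with rawpy.imread("IMG_0001.dng") as raw:              # hypothetical capture
    rgb = raw.postprocess(
        use_camera_wb=True,       # keep the white balance recorded at capture
        no_auto_bright=True,      # skip automatic brightness adjustment
        output_bps=16,            # 16-bit output to preserve tonal range
    )

iio.imwrite("IMG_0001_flat.tiff", rgb)                 # a deliberately flat result
```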

Doing so has its consequences, because the sensors in our phones can only go so far: that is where the power of these algorithms shows, and where, for many, the end justifies the means.

Manufacturers know this, and each has its own interpretation of reality when taking photos. We see it in the way skin tones are rendered, for example: Google has really polished the treatment of Black skin tones in the Pixel 6.

It is just one example of the way each manufacturer understands photography. It shows in shots that are more or less bright, with certain predominant tones or with a particular saturation that may not have much to do with the actual scene, but certainly makes the photo look more lively and cheerful.

Fortunately, we can always edit the photo to adjust it to what we wanted, but the truth is that cameras have become especially smart. That does not seem like a bad thing in view of the results (we take more and better photos and videos than ever), but it is important to keep in mind that the image we have captured may not be an exact reflection of what we wanted to capture.
