iPhone 14 pixels: why the 48MP sensor is not the most significant camera feature this year

This is all about pixels. Specifically, iPhone 14 pixels; more specifically still, iPhone 14 Pro pixels. Because while the headline news is that the latest Pro models have a 48MP sensor instead of a 12MP one, that is not the most significant improvement Apple has made to this year's camera.

Indeed, of the four biggest camera changes this year, the 48MP sensor is, for me, the least significant. But bear with me, as there are a few things we need to clear up before I can explain why I think the 48MP sensor matters far less than the larger sensor, pixel-binning, and the Photonic Engine.

One 48MP sensor, two 12MP ones

We talk about the iPhone camera in the singular, and then refer to three different lenses: main, ultra-wide, and telephoto. We do this because that's how DSLRs and mirrorless cameras work (one sensor, multiple interchangeable lenses), and because that's the illusion Apple creates in the Camera app, for simplicity.

The reality is, of course, different. The iPhone actually has three camera modules. Each module is distinct, and each has its own sensor. When you tap the 3x button, you are not just picking the telephoto lens; you are also switching to a different sensor. When you slide-zoom, the Camera app automatically and invisibly selects the appropriate camera module and then crops as needed.
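Behind the zoom slider, something like the following selection logic is happening. This is a rough sketch only; the module names and thresholds are illustrative, not Apple's actual implementation:

```swift
// Hypothetical sketch of zoom handling: pick a camera module, then crop.
enum CameraModule: Double, CaseIterable {
    case ultraWide = 0.5   // 12MP sensor
    case main      = 1.0   // 48MP sensor
    case telephoto = 3.0   // 12MP sensor
}

/// Pick the longest native "focal length" that doesn't exceed the requested zoom,
/// then make up the difference with a digital crop.
func select(zoomFactor: Double) -> (module: CameraModule, digitalCrop: Double) {
    let module = CameraModule.allCases
        .filter { $0.rawValue <= zoomFactor }
        .max { $0.rawValue < $1.rawValue } ?? .ultraWide
    return (module, zoomFactor / module.rawValue)
}

// Example: a "2.2x zoom" is really the 1x module's sensor cropped by a factor of 2.2.
let choice = select(zoomFactor: 2.2)
print(choice.module, choice.digitalCrop)   // main 2.2
```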

Only the main camera module has a 48MP sensor; the other two modules have 12MP ones.

When it introduced the new models, Apple was completely up-front about this, but it's a fact that some might have missed (our emphasis):

For the first time ever, the Pro lineup features a new 48MP Main camera with a quad-pixel sensor that adapts to the photo being captured, and features second-generation sensor-shift optical image stabilization.

The 48MP sensor works part-time

Even when you are using the main camera, with its 48MP sensor, you are still shooting 12MP photos by default. Apple:

For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.

The only time you shoot in 48 megapixels is when:

  • You are using the main camera (not the telephoto or ultra-wide)
  • You are shooting in ProRAW (which is off by default)
  • You are shooting in decent light

If you do want to shoot in 48MP, here's how to enable it. Most of the time, though, you won't want to.
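For the curious, this is roughly what the equivalent request looks like if you're writing your own camera app with AVFoundation. It's a minimal sketch assuming iOS 16 and an already-configured capture session pointing at the main camera; session setup and error handling are omitted:

```swift
import AVFoundation

// Request a full-resolution ProRAW capture (48MP on the iPhone 14 Pro main
// camera, 12MP on the other modules).
func makeFullResProRAWSettings(output: AVCapturePhotoOutput,
                               device: AVCaptureDevice) -> AVCapturePhotoSettings? {
    guard output.isAppleProRAWSupported else { return nil }
    output.isAppleProRAWEnabled = true

    // Pick a ProRAW pixel format from those the output currently offers.
    guard let rawFormat = output.availableRawPhotoPixelFormatTypes
        .first(where: { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) })
    else { return nil }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)

    // Ask for the largest photo dimensions the active format supports.
    if let maxDims = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width * $0.height < $1.width * $1.height }) {
        output.maxPhotoDimensions = maxDims
        settings.maxPhotoDimensions = maxDims
    }
    return settings
}
```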

Apple's approach makes sense

You might ask: why would Apple provide a 48MP sensor, and then mostly not use it?

Because, most of the time, shooting in 48MP would actually be worse than shooting in 12MP, so it makes no sense to have it as the default.

I can imagine two scenarios where shooting a 48MP image makes sense: when you want to make a very large print, and when you expect to crop heavily.

The second is a bit debatable, because if you need to crop that heavily, you might be better off using the 3x camera in the first place.

Now let's talk sensor size

There are two major differences between a smartphone camera and a high-quality DSLR or mirrorless camera.

One of them is lens quality. Standalone cameras can use far better lenses, partly because of their physical size, and partly because of their cost: it is not unusual for a professional or keen amateur photographer to spend a four-figure sum on a single lens. Smartphone cameras of course can't compete with that.

The second is sensor size. All other things being equal, the larger the sensor, the better the image quality. Smartphones, given how much of their volume is taken up by the screen and everything else they need to pack in, have much smaller sensors than standalone cameras. (They are also very limited in depth, but let's not get into that here.)

A smartphone-sized sensor reduces image quality, and also makes it hard to achieve a shallow depth of field, which is why the iPhone fakes this artificially with Portrait mode and Cinematic video.

Apple's big sensor and limited megapixel strategy

While there are limits to the sensor size you can use in a smartphone, Apple has historically used bigger sensors than other smartphone brands, which is part of the reason the iPhone was long considered the best smartphone for camera quality. (Samsung later switched to doing this, too.)

There is a second element to this: if you want the best quality images from a smartphone, you also want the pixels themselves to be as large as possible.

This is why Apple has stuck to a maximum of 12MP while brands like Samsung have crammed up to 108MP into a similarly sized sensor. Packing more, smaller pixels into the same sensor area means each pixel gathers less light, which greatly increases noise, especially in low-light photographs.
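A toy calculation makes the point. The sensor area below is a made-up round number rather than any real phone's spec; what matters is the relationship: pixel pitch shrinks with the square root of the pixel count, so each pixel gathers proportionally less light.

```swift
// Toy numbers only: a hypothetical sensor area, not a real specification.
let sensorAreaMM2 = 70.0   // square millimetres

func pixelPitchMicrons(megapixels: Double) -> Double {
    let perPixelAreaMM2 = sensorAreaMM2 / (megapixels * 1_000_000)
    return perPixelAreaMM2.squareRoot() * 1_000   // mm to microns
}

print(pixelPitchMicrons(megapixels: 12))    // ≈ 2.4 µm per pixel
print(pixelPitchMicrons(megapixels: 108))   // ≈ 0.8 µm per pixel: roughly 1/9 the light per pixel
```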

OK, it took me a while to get there, but I can now finally explain why I believe the larger sensor, pixel-binning, and the Photonic Engine are a far bigger deal than the 48MP sensor.

#1: iPhone 14 Pro/Max sensor is 65% larger

This year, the main camera sensor in the iPhone 14 Pro/Max is 65% larger than the one in last year's model. That's still tiny compared to a standalone camera, but for a smartphone camera it's (pun intended) massive.

But here's the thing: if Apple had squeezed four times as many pixels into a sensor only 65% larger, each individual pixel would actually be smaller than before, and image quality would suffer. That is precisely why you'll mostly still be shooting 12MP images.
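Putting back-of-the-envelope numbers on that, using the quoted 65% figure and ignoring real-world factors like fill factor and microlens design:

```swift
let areaGain = 1.65        // new main sensor area relative to last year's
let pixelCountGain = 4.0   // 48MP vs 12MP

// Native 48MP: each pixel gets less area than one of last year's 12MP pixels.
let perPixelArea48 = areaGain / pixelCountGain   // ≈ 0.41x

// Binned 12MP: each quad pixel enjoys the full benefit of the bigger sensor.
let perPixelArea12 = areaGain                    // 1.65x

print(perPixelArea48, perPixelArea12)            // 0.4125 1.65
```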

#2: Pixel-binning

Apple uses a pixel-binning technique to shoot 12MP images on the main camera. This means that the data from four neighboring pixels is combined into one virtual pixel (averaging their values), so the 48MP sensor is mostly being used as a larger 12MP one.

A simplified illustration: picture each 2x2 group of small pixels merged into a single, larger one.
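In code terms, a toy version of 2x2 binning on raw brightness values might look like the sketch below; the real process happens on the sensor and image signal processor and is considerably more sophisticated:

```swift
// Toy 2x2 pixel binning: average each group of four neighboring values into one.
func bin2x2(_ pixels: [[Double]]) -> [[Double]] {
    let rows = pixels.count / 2
    let cols = (pixels.first?.count ?? 0) / 2
    return (0..<rows).map { r in
        (0..<cols).map { c in
            (pixels[2*r][2*c] + pixels[2*r][2*c+1]
             + pixels[2*r+1][2*c] + pixels[2*r+1][2*c+1]) / 4
        }
    }
}

// A 4x4 "sensor" becomes a 2x2 image: each output pixel pools the light of four.
let raw: [[Double]] = [[1, 3, 5, 7],
                       [2, 4, 6, 8],
                       [9, 9, 1, 1],
                       [9, 9, 1, 1]]
print(bin2x2(raw))   // [[2.5, 6.5], [9.0, 1.0]]
```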

What does this mean in practice? Pixel size is measured in microns (millionths of a meter). Most premium Android smartphones have pixels in the range of 1.1 to 1.8 microns. The iPhone 14 Pro/Max, when used in 12MP mode, effectively has pixels measuring 2.44 microns. That is a significant improvement.

Without pixel-binning, the 48MP sensor would most of the time actually be a downgrade.

#3: Photonic Engine

In terms of optics and physics, smartphone cameras can't compete with standalone cameras; computational photography is where they can compete.

Computational photography has actually been used in standalone digital cameras for decades. For example, when you change metering modes, you are instructing the computer inside your DSLR to interpret the raw sensor data in a different way. Similarly, consumer DSLRs and mirrorless cameras let you select from a wide range of photo modes, which again tell the processor how to adjust the data from the sensor to achieve the desired result.

So computational photography already plays a much larger role in standalone cameras than many realize. And Apple is very, very good at computational photography. (OK, it's not yet great at Cinematic video, but give it a few years.)

The Photonic Engine is Apple's new computational photography pipeline, which applies its Deep Fusion processing earlier in the imaging process. I'm already seeing a huge difference in the dynamic range of my photographs. (Examples to follow in an iPhone 14 Diary piece next week.) It's not just the range itself, but the intelligent decisions being made about which shadows to bring out and which highlights to tame.
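To give a flavour of the kind of decision involved (this is purely a toy illustration with made-up parameters, and has nothing to do with Apple's actual processing), a crude tone adjustment that lifts shadows and tames highlights might look like this:

```swift
// Toy tone curve: lift shadows, compress highlights. `value` is a luminance in
// 0...1; `shadowLift` and `highlightRollOff` are invented knobs for illustration.
func adjust(_ value: Double,
            shadowLift: Double = 0.15,
            highlightRollOff: Double = 0.2) -> Double {
    let lifted = value + shadowLift * (1 - value) * (1 - value)   // boosts dark tones most
    let compressed = lifted > 0.8
        ? 0.8 + (lifted - 0.8) * (1 - highlightRollOff)           // tames the top end
        : lifted
    return min(max(compressed, 0), 1)
}

print(adjust(0.1))    // shadows lifted noticeably (≈ 0.22)
print(adjust(0.95))   // highlights pulled back slightly (≈ 0.92)
```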

The result is a set of improvements in photos that owe as much to the software as to the hardware.

Wrap-up

When it comes to image quality, a dramatically bigger sensor (in smartphone terms) is a big deal.

Pixel-binning means that Apple has effectively created a much larger 12MP sensor for the majority of photos, making it possible to reap the benefits of the larger sensor.

The Photonic Engine applies Apple's computational photography earlier in the image-processing pipeline, and I'm already seeing the potential of this technology.

I'll be putting the camera to more thorough tests over the next few days, with the results to follow in an iPhone 14 Diary piece.