Note: This is an old post and may or may not accurately represent my current views, current technology, etc.

With the new Nexus 5X and 6P as well as Samsung’s S6, Android phones are finally taking some impressive photos. Google has emphasized the importance of the 1.55 micrometer sensor pixels, and most blogs have picked up on the importance of the larger pixel size and run with it. All things being equal, larger sensor pixels allow better low light photos as well as reduced motion blur, at the cost of fewer total pixels in the final image. Given that smartphone cameras are usually 12 MP or more (above what the typical user needs), the trade-off makes sense.

For those who care about the technical details of camera sensors, pixel size and quantum efficiency are the key factors in determining how much signal a given pixel sees. Each pixel has the task of converting photons into electrons in order to measure the light, so a larger area means that more light is picked up. Quantum efficiency is how well the sensor converts photons into electrons. If 2 photons hit the sensor and 1 electron is generated, the quantum efficiency is 50%. Different colors of light are different frequencies, so the efficiency will vary across the spectrum.
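The photon-to-electron conversion above can be sketched in a few lines. This is just an illustration; the function name and numbers are mine, not from any real sensor datasheet.

```python
# Model a sensor pixel converting photons to electrons.
# A quantum efficiency (QE) of 0.5 means that, on average,
# half the incoming photons produce an electron.
def electrons_generated(photons, quantum_efficiency):
    """Expected number of electrons for a given photon count and QE."""
    return photons * quantum_efficiency

# 2 photons at 50% QE yield 1 electron on average, as in the example above.
print(electrons_generated(2, 0.5))  # 1.0
```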

The next important part about the sensor is how deep each “well” is. When the sensor pixel converts photons into electrons, it stores those electrons in a well. The well has a limit to how many electrons it can store (called saturation capacity); once you hit that limit, the pixel’s output is going to be white. This is what creates blown out parts of an image where overexposure causes white highlights (such as when you’re taking a picture of a tree and the sky in the background is so much brighter that it turns out white instead of blue).
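Saturation is just clipping at the well capacity. Here is a minimal sketch; the capacity figure and the 0–255 output scale are made-up illustrative values, not specs for any real sensor.

```python
# Sketch of well saturation: electrons accumulate up to the well's
# saturation capacity, after which the pixel reads as full white.
SATURATION_CAPACITY = 10_000  # electrons; hypothetical value

def pixel_value(electrons, capacity=SATURATION_CAPACITY):
    """Map stored electrons to a 0-255 output, clipping at capacity."""
    clipped = min(electrons, capacity)
    return round(255 * clipped / capacity)

print(pixel_value(5_000))   # mid-gray
print(pixel_value(50_000))  # 255: a blown-out white highlight
```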

The other side of photography that you have to worry about is noise. Ideally, a photo of an evenly lit gray card would have all the pixels show the same gray color; noise is what causes variation. One of the sources of noise is called temporal dark noise or read noise, which sounds more complicated than it is. This type of noise is simply caused by the inability to count the electrons in a well with perfect accuracy. That means two different wells might hold the same number of electrons but be reported as having different values.
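A quick simulation shows the effect: two readouts of identical well contents come back with different values once read noise is added. The Gaussian model and the noise figure here are my own illustrative assumptions.

```python
import random

# Simulate read noise: the readout adds random error on top of the
# true electron count, so identical wells report different values.
READ_NOISE_ELECTRONS = 3.0  # standard deviation; hypothetical figure

def read_well(true_electrons):
    """Measured electron count = true count + Gaussian read noise."""
    return true_electrons + random.gauss(0, READ_NOISE_ELECTRONS)

random.seed(42)
a = read_well(1_000)
b = read_well(1_000)
print(a, b)  # same well contents, two different reported readings
```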

Another source of noise is shot noise. Mathematically, shot noise is the square root of the number of photons. The important thing to note about shot noise is that, being a square root, it is less significant at higher photon counts. The square root of 4 is 2, a signal-to-noise ratio of 2:1. The square root of 400 is 20, a signal-to-noise ratio of 20:1. That means that with few photons, you’ll have a noisier picture than with more photons, which is why low light performance in cameras is always a challenge.
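The ratios above follow directly from the square root: SNR = photons / sqrt(photons) = sqrt(photons). A short sketch of that arithmetic (the function name is mine):

```python
import math

# Shot noise equals the square root of the photon count, so the
# signal-to-noise ratio improves as more photons are collected.
def shot_noise_snr(photons):
    """SNR = photons / sqrt(photons), which simplifies to sqrt(photons)."""
    return photons / math.sqrt(photons)

print(shot_noise_snr(4))    # 2.0  -> 2:1, as in the text
print(shot_noise_snr(400))  # 20.0 -> 20:1
```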

In order to decrease the effect of shot noise, you need to increase the number of photons. You can do that by using longer exposures, which is where technologies like optical image stabilization (OIS) help by reducing blur when shooting handheld. You can do that by increasing the quantum efficiency of the sensor, which continues to happen as technology improves. You can also do that by increasing the size of the sensor pixels so that each one sees more light. Since the pixel has both a height and a width (meaning you care about the surface area rather than just one dimension), increasing the pixel size from, say, 1 micrometer to 1.5 micrometers increases the photons collected by 125%, which can make a world of difference.
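The area math behind that 125% figure can be checked in a couple of lines, assuming square pixels (the function is my own illustration):

```python
# Photon count scales with pixel area (width x height), so a modest
# increase in pixel pitch compounds quadratically.
def area_increase_pct(old_size_um, new_size_um):
    """Percent increase in light-gathering area for square pixels."""
    old_area = old_size_um ** 2
    new_area = new_size_um ** 2
    return 100 * (new_area - old_area) / old_area

print(area_increase_pct(1.0, 1.5))    # 125.0 percent, as in the text
print(area_increase_pct(1.12, 1.55))  # comparing two common smartphone pixel sizes
```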

There are a lot of other factors that go into the quality of the camera (e.g., white balancing), but it should be clear how big of an impact increasing the size of the sensor pixels can make. I’m looking forward to the competition among Android device manufacturers that will continue to push camera advances forward.