How to measure the resolution of a digital camera?

OK, we have found that the number of pixels is not a good measure of the fineness and definition of detail that a camera can image. What would be a good measure, then? Let us construct a simple “reference” picture of black and white lines of equal width. The thinner the lines, the higher their density (or frequency).

Line pairs of different density and width

As applied to digital images, we can talk about how many line pairs (or cycles) an image can hold, or how many cycles per pixel it can hold. The thinnest line in an image is one pixel wide, so the maximum line frequency an image can hold is one pair of lines per two pixels, or 0.5 cycles per pixel. That may seem obvious, but it also has a strict mathematical proof, known as the Nyquist sampling theorem.
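A minimal sketch of this limit in Python (an illustration, not part of the original article): the finest pattern a digital image can hold alternates black (0) and white (1) every pixel, i.e. one line pair per two pixels.

```python
width = 8
finest = [i % 2 for i in range(width)]   # [0, 1, 0, 1, 0, 1, 0, 1]

line_pairs = width // 2                  # each pair is one black + one white pixel
cycles_per_pixel = line_pairs / width    # 0.5, the Nyquist limit

print(line_pairs, cycles_per_pixel)      # 4 0.5
```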

In the same way, we can talk about how many line pairs a camera can capture and deliver in the resulting image.

Let us model an “ideal” camera: an ideal lens (no blur, no distortions) and a sensor completely covered by an array of pixels. Every pixel registers a signal proportional to the amount of light it receives.

How would such a camera image a target of black-and-white lines if the width of a line were exactly the same as the dimension of a pixel? The image will be quite different depending on whether the lines fall exactly onto the pixels or between them:

Line pairs imaged by a 'simple' sensor

Luckily, real scenes usually do not have exactly the same structure as the sensor. To make our model more realistic, we will tilt the lines, so that if the edges of the lines match the edges of the pixels in one part of the picture, they will not match in other parts. This is how the tilted lines will be imaged by our ideal camera:

Tilted line pairs pictured by a 'simple' sensor

The contrast between black and white lines varies from 100% of the original contrast down to none. Nevertheless, the lines are still perceived clearly enough. Computed accurately, the average contrast in our snapshot is 50% of the original.
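The 50% figure can be checked numerically. Below is a sketch (my own Python model, not from the article) of the one-dimensional version of this experiment: a square-wave target whose line width equals the pixel pitch, sampled at many phase offsets between target and pixel grid, each pixel integrating the light that falls on it.

```python
def pixel_values(phase, n_pixels=8, samples=1000):
    """Integrate a period-2 square wave (one pixel black, one white) over each pixel."""
    values = []
    for k in range(n_pixels):
        # average the target over the pixel footprint [k, k+1), shifted by phase
        total = 0.0
        for s in range(samples):
            x = k + (s + 0.5) / samples + phase
            total += 1.0 if (x % 2.0) < 1.0 else 0.0
        values.append(total / samples)
    return values

# contrast of the captured pattern at 100 different phase offsets
contrasts = [max(v) - min(v) for v in (pixel_values(i / 100) for i in range(100))]
average_contrast = sum(contrasts) / len(contrasts)
print(average_contrast)  # close to 0.5, i.e. 50% of the original contrast
```

At phase 0 the lines land exactly on the pixels and contrast is 100%; at phase 0.5 every pixel averages to gray and contrast vanishes, just as in the picture above.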

Taking a photo of a similar target with a lower density of lines gives a higher average contrast.

Tilted lines of low frequency pictured by a 'simple' sensor

What happens if we try to image line pairs of higher frequency? See the pictures below: lines are visible, but they run in a different direction and, moreover, are thicker, that is, of lower frequency than the original!

Sensor averages high frequency line pairs

This is caused by so-called aliasing. A sensor unable to image a pattern of frequency higher than 0.5 cycles/pixel delivers not merely lower contrast, but a completely wrong picture.
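Aliasing can be demonstrated in a few lines (again an illustration, not from the article): sampled once per pixel, a sinusoidal pattern at 0.75 cycles per pixel produces exactly the same samples as one at 0.25 cycles per pixel, so the sensor has no way to tell the two apart.

```python
import math

n = range(32)  # one sample per pixel
above_nyquist = [math.cos(2 * math.pi * 0.75 * k) for k in n]  # 0.75 cycles/pixel
alias         = [math.cos(2 * math.pi * 0.25 * k) for k in n]  # 0.25 cycles/pixel

# the two sample sets coincide: the high frequency "folds" down to a low one
max_difference = max(abs(a - b) for a, b in zip(above_nyquist, alias))
print(max_difference)  # ~0, floating-point noise only
```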

Digital cameras usually have an anti-aliasing filter in front of the sensor. Such a filter prevents aliasing artifacts by simply blurring high-frequency patterns.

Defining the measure

Now we can define a measure of sharpness based on what we learned from playing with the “ideal” camera.

The definition differs depending on whether we speak in terms of density (or frequency) of detail per pixel, or of the total number of elementary details the camera can image.

Density of line pairs or cycles: the number of line pairs (or cycles) per pixel that the camera can image with, on average, 50% of the original contrast

Number of line pairs in one dimension: the number of line pairs per whole width or height that the camera can image with, on average, 50% of the original contrast

Number of lines in one dimension: the number of lines per whole width or height that the camera can image with, on average, 50% of the original contrast

MTF50

The number of cycles per pixel imaged with 50% of the original contrast is also called MTF50. That is, if we measure the contrast imaged at different line frequencies and build a function MTF(f), where f is the frequency and MTF(f) is the contrast imaged, the frequency at which this function reaches the value of 0.5 (or 50 percent) is called MTF50.
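In practice MTF is measured at a handful of frequencies and the 0.5 crossing is found by interpolation. A small sketch of that step (the exponential MTF curve here is synthetic, chosen purely for illustration):

```python
import math

def mtf50(freqs, mtf):
    """Frequency where the measured MTF curve crosses 0.5 (linear interpolation)."""
    for i in range(1, len(freqs)):
        if mtf[i] <= 0.5 <= mtf[i - 1]:
            t = (mtf[i - 1] - 0.5) / (mtf[i - 1] - mtf[i])
            return freqs[i - 1] + t * (freqs[i] - freqs[i - 1])
    return None  # the curve never drops to 50% in the measured range

freqs = [i / 100 for i in range(51)]          # 0 .. 0.5 cycles/pixel
mtf   = [math.exp(-2.5 * f) for f in freqs]   # synthetic falling contrast curve

print(mtf50(freqs, mtf))  # ~0.277 cycles/pixel (analytically, ln 2 / 2.5)
```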

The MTF50 of the ideal camera we modeled, with a W×H-pixel sensor, is 0.5 line pairs per pixel, or W/2 line pairs per width and H/2 line pairs per height, or W lines per width and H lines per height.

Lens blur

How does the blur introduced by a lens influence resolution and sharpness? Having contrast as part of the definition clearly accounts for lens blur. The blurrier the lens, the less contrast it delivers, so the density of detail that can be imaged with 50% of the original contrast decreases.

Back to megapixels

So, does it still make sense to express resolution in the term we are used to, megapixels? Yes. Given a real camera capable of imaging W lines per width and H lines per height (with no less than 50% contrast!), we can say that it has an “effective resolution” of W×H pixels, that is, the resolution our ideal camera with a W×H-pixel sensor would have. However, you should always remember that such effective resolution depends on lens blur, which varies with aperture and focal length.
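One way to turn this into numbers is the following sketch. It assumes (my assumption, not stated in the article) that the resolved line count scales linearly with MTF50, so the sensor's pixel count is scaled by the squared ratio of the measured MTF50 to the ideal 0.5 cycles/pixel:

```python
def effective_megapixels(sensor_mp, mtf50_cycles_per_pixel):
    # fraction of the ideal line count resolved in each dimension,
    # squared because it applies to both width and height
    fraction = mtf50_cycles_per_pixel / 0.5
    return sensor_mp * fraction ** 2

print(effective_megapixels(21.0, 0.5))   # ideal camera: the full 21.0 MP
print(effective_megapixels(21.0, 0.33))  # blurrier lens: only ~9.1 MP
```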

A well-known tool for testing camera performance, Imatest, calculates the effective resolution along with MTF. For example, a 6-megapixel camera, the Canon EOS 300D, equipped with an EF 50mm f/1.8 II lens, showed an effective resolution of 3.15 megapixels.

The measured effective resolution of the 21-megapixel Canon 1Ds Mark III with an EF 24-105mm f/4L lens at 58mm and f/8 is 9.26 MP. The finest pattern a 21 MP sensor can hold can still be resolved by the 1Ds Mark III with that lens, but only 14% of the original contrast is imaged.

The same details are captured, but in the right image the contrast is lower, so the details are less well defined:

Same details captured with different resolution


Introduction to Resolution and Sharpness of digital images and digital cameras

As photography is intended to image objects, the central concept of cameras and images is resolution: the ability of an imaging system to resolve detail in the object being imaged.

Eye chart imaged with different resolutions

In the world of digital photography, resolution is often, and often mistakenly, equated with the number of pixels (or megapixels, millions of pixels), the atomic elements of a digital picture.

While the number of pixels in an image limits the detail the image can hold, it does not guarantee any particular level of detail. Enlarging an image increases the number of pixels but does not add detail.

The number of pixels in a camera does not guarantee detail either; it only limits the fineness of detail the camera can capture.

Moreover, resolution alone does not mean much. While it shows the fineness, or maximum density, of detail that a camera can image (resolve), it does not show how well that detail is resolved. That is why photographers talk not about resolution alone but about clarity or sharpness, which combines how much detail can be resolved (the resolution) with how well it is resolved, in other words, how faithfully the transition of brightness at edges is captured (the acutance). If we did not care how well edges are defined, we would not need 8-bit-per-pixel or 12-bit-per-pixel sensors; pure 1-bit sensors would be enough.

Test target images with different acutance

Test target pictured with different definition of edges

The detail a camera can capture is limited by the number of pixels in its sensor, but that is not the only limitation. Detail is also limited by factors such as:

  • Lens blur: the part of a scene that should be projected onto a single pixel of the sensor is partially projected onto neighboring pixels
  • Noise: random changes in brightness and color affect the accuracy of image detail. The noise-to-signal ratio grows as less light is captured by the sensor: either the physical size of a pixel is small (and it gets smaller and smaller as more megapixels are packed into a sensor whose own size does not grow!), or the lens area and aperture are small, so less light reaches the sensor.

Scene captured with different noise levels
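The noise point above follows from photon (shot) noise, whose standard deviation is the square root of the photon count; a quick sketch of that arithmetic:

```python
import math

def shot_noise_snr(photons):
    # shot noise has a standard deviation of sqrt(photons),
    # so SNR = photons / sqrt(photons) = sqrt(photons)
    return photons / math.sqrt(photons)

# quartering the pixel area (e.g. packing 4x the megapixels into the same
# sensor) quarters the light each pixel collects and halves its SNR
print(shot_noise_snr(10000))  # 100.0
print(shot_noise_snr(2500))   # 50.0
```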

That is why a professional camera with only 6 megapixels but a large sensor, equipped with a precise wide-aperture lens, gives more detail than a tiny 12-megapixel consumer camera.