OK, we have found that the number of pixels is not a good measure of the fineness and definition of detail that a camera can image. What would be a good measure, then? Let us construct a simple “reference” picture of black and white lines of equal width. The thinner the lines, the higher their density (or frequency).
As applied to digital images, we can talk about how many line pairs (or cycles) an image can hold, or how many cycles per pixel it can hold. The thinnest possible line in an image is one pixel wide, so the maximum line frequency an image can hold is one pair of lines per two pixels, or 0.5 cycles per pixel. That seems obvious, but it also has a strict mathematical proof, known as the Nyquist theorem.
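To make the 0.5 cycles/pixel limit concrete, here is a minimal sketch in Python with numpy (a toy one-dimensional “image”, not any particular camera) holding lines at exactly that maximum frequency:

```python
import numpy as np

# One cycle = one black line + one white line. At 0.5 cycles/pixel,
# each line is exactly one pixel wide.
width = 8                       # pixels in our toy 1-D "image"
freq = 0.5                      # cycles per pixel (the Nyquist limit)
x = np.arange(width)            # pixel indices

# Square-wave target: 1 = white line, 0 = black line.
target = (np.floor(2 * freq * x) % 2 == 0).astype(float)
print(target)                   # [1. 0. 1. 0. 1. 0. 1. 0.]
```

Any attempt to store a finer pattern than this alternation of single pixels has nowhere to go – which is exactly what the Nyquist theorem formalizes.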
In the same way, we can talk about how many line pairs a camera can capture and deliver in the resulting image.
Let us model an “ideal” camera – with an ideal lens (no blur, no distortion) and a sensor completely covered by an array of pixels. Every pixel registers a signal proportional to the amount of light it receives.
How would such a camera image a target of black-and-white lines if the width of a line were exactly the same as the width of a pixel? The image will be quite different depending on whether the lines fall exactly on the pixels or in between them:
Luckily, real scenes usually do not have exactly the same structure as the sensor. To make our model more realistic, we will tilt the lines – so if in some part of the picture the edges of the lines match the edges of the pixels in the sensor, they will not match in other parts. This is how the tilted lines will be imaged by our ideal camera:
The contrast between the black and white lines varies from 100% of the original contrast to none. Nevertheless, the lines are still perceived clearly enough. Computed accurately, the average contrast in our snapshot is 50% of the original contrast.
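We can verify that 50% figure numerically. The sketch below is a toy model (assuming each ideal pixel simply averages all the light falling on it) that sweeps the phase of a 0.5 cycles/pixel line pattern across the pixel grid and averages the resulting contrast:

```python
import numpy as np

def pixel_contrast(phase, n_pixels=32, samples_per_pixel=1000):
    """Contrast recorded by a row of ideal pixels imaging a square wave
    of one-pixel-wide lines (0.5 cycles/pixel), shifted by `phase` pixels."""
    # Supersample the continuous target, then average inside each pixel
    # to model a pixel integrating all the light that falls on it.
    t = np.arange(n_pixels * samples_per_pixel) / samples_per_pixel - phase
    wave = (np.floor(t) % 2 == 0).astype(float)   # alternating 1-px lines
    pixels = wave.reshape(n_pixels, samples_per_pixel).mean(axis=1)
    return pixels.max() - pixels.min()            # 1.0 = full original contrast

phases = np.linspace(0, 1, 101)                   # sweep across one line width
contrasts = [pixel_contrast(p) for p in phases]
print(f"edges aligned:   {contrasts[0]:.2f}")     # ~1.00
print(f"lines straddled: {contrasts[50]:.2f}")    # ~0.00
print(f"average:         {np.mean(contrasts):.2f}")  # ~0.50
```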
Taking a photo of a similar target with a lower density of lines will give a higher average contrast.
What happens if we try to image line pairs of higher frequency? See the pictures below: the lines are visible, but they run in a different direction and, moreover, are thicker – that is, of lower frequency than in the original!
This is caused by so-called aliasing. The sensor, which is unable to image a pattern of frequency higher than 0.5 cycles/pixel, delivers not just lower contrast but a completely wrong picture.
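This frequency folding is easy to reproduce. In the one-dimensional sketch below (hypothetical numbers), a pattern at 0.7 cycles per pixel, sampled at pixel centers, produces exactly the same values as a pattern at 0.3 cycles per pixel:

```python
import numpy as np

x = np.arange(20)                       # pixel centers
f_real = 0.7                            # cycles/pixel, above the 0.5 limit
sampled = np.cos(2 * np.pi * f_real * x)

# The samples are indistinguishable from a lower-frequency pattern:
f_alias = 1.0 - f_real                  # folded back below Nyquist, 0.3 c/p
print(np.allclose(sampled, np.cos(2 * np.pi * f_alias * x)))  # True
```

Once the samples are taken, no processing can tell the two patterns apart – the wrong, lower-frequency interpretation is all that remains.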
Digital cameras usually have anti-aliasing filters in front of their sensors. Such a filter prevents the appearance of aliasing artifacts by simply blurring high-frequency patterns.
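Real anti-aliasing filters are optical (typically birefringent plates), but their effect can be approximated by a Gaussian blur. In the sketch below, with an assumed, purely illustrative filter strength, frequencies above the Nyquist limit are suppressed far more strongly than those below it:

```python
import numpy as np

def gaussian_mtf(freq, sigma):
    """Contrast attenuation of a Gaussian blur with standard deviation
    `sigma` (pixels) at spatial frequency `freq` (cycles/pixel)."""
    return np.exp(-2 * (np.pi * sigma * freq) ** 2)

sigma = 0.6  # assumed filter strength, for illustration only
for f in (0.2, 0.5, 0.7):
    print(f"{f} cycles/pixel -> {gaussian_mtf(f, sigma):.0%} contrast left")
# 0.2 -> ~75%, 0.5 -> ~17%, 0.7 -> ~3%: patterns above the Nyquist limit
# are blurred away before they can fold back as aliases.
```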
Defining the measure
Now we can define a measure for sharpness based on what we learned from playing with an “ideal” camera.
The definition differs depending on whether we express it as a density (or frequency) of detail per pixel or as a total number of elementary details the camera can image.
Density of line pairs or cycles: the number of line pairs (or cycles) per pixel that the camera can image with an average of 50% of the original contrast.
Number of line pairs in one dimension: the number of line pairs per whole width or height that the camera can image with an average of 50% of the original contrast.
Number of lines in one dimension: the number of lines per whole width or height that the camera can image with an average of 50% of the original contrast.
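These three measures are simple rescalings of one another, as the sketch below shows (the numbers are hypothetical):

```python
# Converting between the three equivalent measures for a camera whose
# sensor is `width_px` pixels across.
width_px = 4000
cycles_per_pixel = 0.35        # assumed density of 50%-contrast detail

line_pairs_per_width = cycles_per_pixel * width_px  # 1400 lp per picture width
lines_per_width = 2 * line_pairs_per_width          # 2800 lines per picture width
print(line_pairs_per_width, lines_per_width)
```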
MTF50
The number of cycles per pixel imaged with 50% of the original contrast is also called MTF50. That is, if we measure the contrast imaged at different line frequencies and build a function MTF(f), where f is the frequency and MTF(f) is the contrast imaged, then the frequency at which this function reaches the value 0.5 (50 percent) is called MTF50.
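In practice, MTF50 is found by measuring contrast at several test frequencies and interpolating between them. A minimal sketch, using made-up measurements:

```python
import numpy as np

# Hypothetical measured contrasts at several test frequencies (cycles/pixel).
freqs = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])
mtf   = np.array([0.98, 0.93, 0.78, 0.55, 0.33, 0.17])

# MTF50 = the frequency where contrast drops to 0.5; interpolate linearly
# between the two measurements that bracket it.
i = np.argmax(mtf < 0.5)                # first point below 50%
f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
mtf50 = f0 + (m0 - 0.5) * (f1 - f0) / (m0 - m1)
print(f"MTF50 = {mtf50:.3f} cycles/pixel")   # ~0.323 for this made-up data
```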
The MTF50 of the ideal camera we modeled, with a W×H pixel sensor, will be 0.5 line pairs per pixel – that is, W/2 line pairs per width or H/2 line pairs per height, or W lines per width or H lines per height.
Lens blur
How does the blurring introduced by a lens influence resolution and sharpness? Having contrast as part of the definition already accounts for lens blur. The blurrier the lens, the less contrast it delivers, so the density of detail that can be imaged with 50% of the original contrast decreases.
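To see this quantitatively, we can model lens blur as a Gaussian point-spread function (an assumption for illustration; real lens blur is more complex). The MTF of a Gaussian blur has a closed form, so the frequency where contrast falls to 50% can be solved for directly:

```python
import numpy as np

def mtf50_of_gaussian_blur(sigma):
    """Frequency (cycles/pixel) where a Gaussian blur of standard deviation
    `sigma` (pixels) leaves exactly 50% contrast: solve
    exp(-2 * (pi * sigma * f)^2) = 0.5 for f."""
    return np.sqrt(np.log(2) / 2) / (np.pi * sigma)

for sigma in (0.5, 0.8, 1.2):   # hypothetical blur radii, in pixels
    print(f"sigma = {sigma} px -> MTF50 = {mtf50_of_gaussian_blur(sigma):.2f} c/p")
# 0.37, 0.23, 0.16 cycles/pixel: the blurrier the lens, the lower the MTF50.
```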
Back to megapixels
So, does it still make sense to express resolution in the term we are used to – megapixels? Yes. Given a real camera capable of imaging W lines per width and H lines per height (with no less than 50% contrast!), we can say that it has an “effective resolution” of W×H pixels – the resolution our ideal camera with a W×H sensor would have. However, you should always remember that this effective resolution depends on lens blur, which varies with aperture and focal length.
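Under this definition, the effective pixel count scales with the square of the ratio between the measured MTF50 and the ideal 0.5 cycles/pixel. A sketch with hypothetical numbers:

```python
def effective_megapixels(sensor_mp, mtf50_cpp):
    """Effective resolution under the definition above: an ideal camera
    reaches MTF50 = 0.5 cycles/pixel, so the effective pixel count
    scales with (2 * MTF50)^2."""
    return sensor_mp * (2 * mtf50_cpp) ** 2

# Hypothetical example: a 12 MP sensor whose lens-plus-sensor combination
# reaches 50% contrast at 0.32 cycles/pixel.
print(f"{effective_megapixels(12, 0.32):.1f} MP")   # ~4.9 MP effective
```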
A well-known tool for testing camera performance, Imatest, calculates the effective resolution along with the MTF. For example, a 6-Megapixel camera, the Canon EOS 300D, equipped with an EF 50mm f/1.8 II lens, showed an effective resolution of 3.15 Megapixels (details).
The measured effective resolution of the 21-Megapixel Canon 1Ds mk III with an EF 24-105 f/4L lens at 58mm f/8 is 9.26 MP. The finest pattern that a 21 MP sensor can hold can still be resolved by the Canon 1Ds mk III with that lens, but only 14% of the original contrast is retained.
The same details are captured, but in the right image the contrast is lower – the details are less well defined:
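Reading the published numbers back through the same relation (assuming Imatest's effective-resolution figure follows the definition above, which is a simplification) suggests the MTF50 that combination reached:

```python
# Back-of-the-envelope check against the published Imatest result.
sensor_mp, effective_mp = 21.0, 9.26
mtf50 = 0.5 * (effective_mp / sensor_mp) ** 0.5
print(f"implied MTF50 = {mtf50:.2f} cycles/pixel")  # ~0.33, vs 0.5 ideal
```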