Image Sensors World: Oppo Smartphone Snaps 50MP Pictures

Engadget has had a chance to test the Oppo Find 7 smartphone camera – a high-end smartphone long rumored to feature a 50MP sensor.

Although it’s hard to tell for sure, it looks like Oppo has licensed Almalence’s Super Zoom technology. Below are a comparison of Almalence Super Resolution Zoom with Sony Clear Image Zoom and an article on its implementation in Huawei phones.


Almalence makes super resolution available on camera phones

Almalence, the developer of the world’s first super resolution technology commercially available on desktop computers, announces that its solution is now available in the most advanced smartphones of 2013.

Super resolution is now used to provide high-quality pictures at high zoom levels in Huawei’s flagship models Ascend P2 and Ascend P6, in SHARP’s new devices, and in upcoming devices from several Korean and Taiwanese mobile phone makers.

Nowadays, the resolution of modern mobile phone cameras is quite sufficient for amateur photography – until it comes to zooming. With no optical zoom lens, zoomed images are nothing more than upsized and sharpened low-resolution crops. For example, 4x zoom on an 8 Megapixel mobile phone has 0.5 Megapixel effective resolution at best, not taking lens blur into account.
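The arithmetic behind that figure is simple: digital zoom crops the sensor, so effective resolution falls with the square of the zoom factor. A minimal sketch:

```python
def digital_zoom_effective_mp(sensor_mp: float, zoom: float) -> float:
    """Effective megapixels of a pure digital zoom: the crop that gets
    upsized covers only 1/zoom of the frame in each dimension."""
    return sensor_mp / zoom ** 2

# 4x digital zoom on an 8 MP sensor:
print(digital_zoom_effective_mp(8, 4))  # 0.5 MP at best, ignoring lens blur
```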

That’s where Almalence’s super resolution can be used to enhance image quality. While further improvement of the sensor and lens is either impossible or very expensive, software solutions allow going beyond the camera’s physical limits.

“Almalence’s Super Resolution Zoom is a pure software solution that does not require changing the sensor or lens. This allows an extremely quick and cost-effective integration,” says Eugene Panich, CEO of Almalence, Inc. “In the beginning of 2013 we integrated Super Resolution Zoom into quite different devices developed by several OEMs, in very short timeframes and with almost no effort required from the OEMs.”

Huawei Ascend P2 Super Resolution Zoom Test

The Huawei Ascend P2 was the first device on the market with super resolution zoom. These comparison photos were taken at MWC 2013, where the P2 was presented. (Left: usual zoom; right: super resolution zoom)

Unlike edge enhancement and noise filtering techniques, super resolution provides a real increase of effective resolution, making visible details that are indistinguishable in normal shots. It virtually doubles the megapixel count of the camera. By capturing more detail in the zoomed area, it replaces an optical zoom lens at a tiny fraction of the cost of such a lens, while adding nothing to the weight and size of the device.

Huawei Ascend P6 Super Resolution Zoom test under 150 Lux: Resolution increases 1.9 times

“Spilled coins” (aka “Dead Leaves”) chart taken with Huawei P6 mobile phone (illuminance: 150 lux). Normal 2x zoom vs Super Resolution 2x Zoom. MTF30 measurement with ImaTest shows 1.9x effective resolution increase.

Almalence’s Super Resolution Zoom is a multi-frame technology, which adds the benefit of drastic noise reduction in low-light conditions.
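The noise-reduction side effect follows from basic statistics: averaging N frames of independent noise lowers the noise standard deviation by roughly a factor of √N. A toy illustration of that principle (not Almalence’s actual pipeline):

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0   # "scene" brightness of one pixel
SIGMA = 10.0         # per-frame noise standard deviation
N_FRAMES = 8
N_TRIALS = 10_000

# Noise of a single frame vs noise of an 8-frame average
single = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(N_TRIALS)]
averaged = [
    statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(N_FRAMES))
    for _ in range(N_TRIALS)
]

print(statistics.stdev(single))    # close to 10
print(statistics.stdev(averaged))  # close to 10 / sqrt(8), about 3.5
```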

Super Resolution Zoom under low light (20 Lux). Noise is reduced with no loss of detail. (Test with Huawei Ascend P6)

Images taken with Huawei P6 mobile phone under low light (20 Lux): Normal 2x zoom versus Super Resolution 2x Zoom

Super resolution zoom by Almalence can operate in different modes: in some devices it runs as background processing, while in others it works in real time in the viewfinder.

Compared to technologies that imply physical improvement of the camera, such as BSI sensors or 41 Megapixel sensors, Super Resolution Zoom is not just a competitor but also a good complement: it is compatible with almost any kind of sensor and lens and improves image quality no matter how good the camera is.

Almalence’s nearest plan is to implement hardware super resolution zoom using the Tensilica IVP32 image processor, which will result in a solution capable of real-time video processing and of processing high-resolution still images in tens of milliseconds.


Almalence Super Resolution vs SONY Clear Image Zoom (single-frame super resolution)

From time to time we are asked to compare Almalence Super Resolution with other super resolution technologies available on the market, most of them single-frame. Our previous test covered Panasonic Intelligent Zoom.

Today we will test SONY’s Clear Image Zoom technology, which is used in some SONY cameras for better digital zoom quality.

We use a SONY RX100 camera with the Clear Image Zoom function.

First of all, let’s compare zoomed images taken with Clear Zoom disabled and enabled:

Top: standard zoom, bottom: Clear Image Zoom (SONY RX100)

Top: standard zoom, bottom: Clear Image Zoom (SONY RX100)

We see that Clear Image Zoom produces a somewhat sharper image, and in places it adds a bit more detail (for example, see the second character in the center).

Now let’s compare to Almalence Super Resolution. We took a series of RAW images at the same zoom level and processed them with PhotoAcute:

SONY Clear Image Zoom vs Almalence Super Resolution

SONY Clear Image Zoom vs Almalence Super Resolution

It’s easy to see that Almalence Super Resolution adds more detail and is far superior to Clear Image Zoom. This is an expected result when comparing a single-frame resolution enhancement technology to multi-frame super resolution.
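Why multi-frame wins can be shown with a toy 1D example: if the frames are shifted by a known subpixel amount, their samples interleave on a finer grid and genuinely new detail is recovered, which no single-frame method can do. A minimal shift-and-add sketch (illustrative only, not Almalence’s algorithm):

```python
# Toy 1D shift-and-add super resolution: two half-pixel-shifted
# low-res captures of the same signal interleave into the full signal.
high_res = [3, 7, 2, 9, 4, 8, 1, 6]        # "true" fine-grained scene

# Each low-res frame samples every other high-res position;
# the second frame is shifted by half a low-res pixel.
frame_a = high_res[0::2]                    # samples 0, 2, 4, 6
frame_b = high_res[1::2]                    # samples 1, 3, 5, 7

# Shift-and-add: place each frame's samples back on the fine grid.
reconstructed = [0] * len(high_res)
reconstructed[0::2] = frame_a
reconstructed[1::2] = frame_b

print(reconstructed == high_res)  # True: shifted frames recover full detail
```

Real implementations must also estimate the shifts from the frames themselves and handle pixel integration and noise, but the core idea is the same.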

More examples:

The web address and phone number, unreadable in the Clear Image Zoom photo, become readable in the Super Resolution image:

Letters in the center of the image and “3F” at the bottom-right corner become readable:

Original images and a copy of the application used for processing are available upon request.

Test of Panasonic LX7 Intelligent Zoom

Panasonic features a so-called Intelligent Zoom mode in the Lumix LX7 camera, claiming that with their “Intelligent Resolution” technology the 3.8x optical zoom virtually extends to a 7.5x equivalent.
It sounds like single-frame super resolution, so we decided to take some test images to see how good it is.

Below: 3.8x versus 7.5x zoom

Panasonic Lumix LX7: 3.8x zoom and 7.5x (intelligent) zoom

Panasonic Lumix LX7: 3.8x zoom and 7.5x (intelligent) zoom

100% Crops. Top: 3.8x zoom, bottom: 7.5x zoom (with Intelligent Resolution)

Comparison of 3.8x zoom with 7.5x Intelligent Zoom (Panasonic Lumix LX7)

Top: 3.8x zoom, bottom: 7.5x Intelligent Zoom

As we can see, Intelligent Resolution does not add more detail (so the resolution is not increased). The same result can be achieved with extra sharpening. See below a comparison between “Intelligent” zoom and a bicubic upsize + unsharp mask filter in Photoshop:

Left: Panasonic Intelligent Resolution, right: generic zoom + bicubic upsize + unsharp mask

Left: Panasonic Intelligent Resolution, right: generic zoom + bicubic upsize + unsharp mask
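The generic upsize-and-sharpen baseline is easy to reproduce. Below is a 1D sketch of an unsharp mask – blur the signal, then add back a scaled high-frequency residual; the blur kernel and gain here are illustrative, not Photoshop’s exact filter:

```python
def box_blur(signal, radius=1):
    """Simple moving-average blur with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    """Sharpen by adding back the high-frequency residual.
    Raises local contrast at edges but cannot create new detail."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 10, 10, 10]               # a step edge
sharpened = unsharp_mask(edge, amount=1.0)
print(sharpened)  # overshoot/undershoot appears around the edge
```

The overshoot makes edges look crisper, which is exactly why sharpened-only zoom can be mistaken for a resolution increase.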

Conclusion: Panasonic Intelligent Zoom (or Intelligent Resolution) delivers sharper images but does not provide higher resolution, and it cannot compete with optical zoom.

Full original images available upon request

Superb quality zoom with newly available Huawei Ascend P2, first tests at MWC 2013

The camera has become one of the features (if not the only one) that really make the difference in modern smartphones. While leading OEMs such as HTC, Sony and Intel emphasize their new advanced camera features, Huawei offers something really unique – the SuperZoom feature, which allows taking high-quality images when zooming in with a mobile phone camera.

We had a chance to test the SuperZoom feature on the Huawei Ascend P2 phone at Huawei’s stand at Mobile World Congress. In brief, it works impressively well – see the pictures below.

Comparison: SuperZoom vs normal zoom, Huawei Ascend P2

Left: normal zoom, right: Super Zoom

SuperZoom versus ordinary zoom comparison, Huawei Ascend P2

Left: generic digital zoom, right: SuperZoom

Comparison: Super Zoom versus generic zoom, Huawei P2

Left: ordinary zoom, right: Super Zoom

SuperZoom on Huawei P2, sample image, test against usual zoom

Left: usual zoom, low detail; right: Super Zoom, high resolution and a lot of detail

Conclusion: SuperZoom is a revolutionary technology providing zoom quality comparable to optical zoom, on mobile devices!

Feb 25, 2013, Barcelona.

How to measure the resolution of a digital camera?

OK, we have found that the number of pixels is not a good measure of the fineness and definition of detail that a camera can image. What would be a good measure, then? Let us construct a simple “reference” picture of black and white lines of the same width. The thinner the lines are, the higher the density (or frequency) is.

Line pairs of different density and width

As applied to digital images, we can talk about how many line pairs (or cycles) an image can hold, or how many cycles per pixel it can hold. The thinnest line in an image is one pixel wide, so the maximum line frequency an image can hold is one pair of lines per two pixels, or 0.5 cycles per pixel. That is obvious, but it also has a strict mathematical proof, known as the Nyquist theorem.

In the same way, we can talk about how many line pairs a camera can capture and deliver in the resulting image.

Let us model an “ideal” camera – one with an ideal lens (no blur, no distortions) and a sensor completely covered by an array of pixels. Every pixel registers a signal proportional to the amount of light it receives.

How would such a camera image a target of black-and-white lines if the width of a line were exactly the same as the dimension of a pixel? The image will be quite different depending on whether the lines fall exactly on the pixels or between them:

Line pairs imaged by a 'simple' sensor

Luckily, the real scenes usually do not have exactly the same structure as the sensor has. To make our model more realistic, we will tilt the lines – so if in some part of the picture the edges of the lines match the edges of the pixels in the sensor, they will not match in the other parts. This is how the tilted lines will be imaged by our ideal camera:

Tilted line pairs pictured by a 'simple' sensor

The contrast between black and white lines varies from 100% of the original contrast to none. Nevertheless, the lines are still perceived clearly enough. Computed accurately, the average contrast in our snapshot would be 50% of the original contrast.
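That 50% figure can be checked numerically with the simplest 1D model of the situation: each pixel integrates the light falling on it, and we average the resulting contrast over every possible alignment of the lines against the pixel grid (a sketch of the model described above, not a rigorous derivation):

```python
def pixel_contrast(phase):
    """Contrast between adjacent pixels when a 1-pixel-wide black/white
    line pattern is shifted by `phase` pixels relative to the pixel grid.
    Each pixel integrates the light it receives, so adjacent pixels read
    (1 - phase) and phase; their contrast is |1 - 2 * phase|."""
    return abs(1 - 2 * phase)

# Average the contrast over all possible line-vs-pixel alignments
steps = 100_000
avg = sum(pixel_contrast(k / steps) for k in range(steps)) / steps
print(round(avg, 3))  # 0.5: half the original contrast, on average
```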

Taking a photo of a similar target with a lower density of lines will give higher average contrast.

Tilted lines of low frequency pictured by a 'simple' sensor

What happens if we try to image line pairs of higher frequency? See the pictures below: the lines are visible, but they have a different direction and, moreover, a thicker width – that is, a lower frequency than the original!

Sensor averages high frequency line pairs

This is caused by so-called aliasing. A sensor that is not able to image a pattern of frequency higher than 0.5 cycles/pixel delivers not only lower contrast but a completely wrong picture.
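Aliasing is easy to demonstrate numerically: a sinusoidal pattern above 0.5 cycles/pixel, sampled at pixel centers, produces exactly the same samples as a lower-frequency pattern – so the sensor cannot tell them apart (a hypothetical 1D illustration):

```python
import math

def sample(freq_cycles_per_pixel, n_pixels=8):
    """Sample a cosine pattern at integer pixel positions."""
    return [math.cos(2 * math.pi * freq_cycles_per_pixel * x)
            for x in range(n_pixels)]

# 0.7 cycles/pixel is above the 0.5 cycles/pixel Nyquist limit ...
above_nyquist = sample(0.7)
# ... so its samples are indistinguishable from a 0.3 cycles/pixel pattern:
alias = sample(0.3)

print(all(abs(a - b) < 1e-9 for a, b in zip(above_nyquist, alias)))  # True
```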

Digital cameras usually have anti-aliasing filters in front of their sensors. Such a filter prevents the appearance of aliasing artifacts by simply blurring high-frequency patterns.

Defining the measure

Now we can define a measure for sharpness based on what we got from playing with the “ideal” camera.

The definition differs depending on whether we use terms of density (or frequency) of detail per pixel, or the total number of elementary details that can be imaged by the camera.

Density of line pairs or cycles: the number of line pairs (or cycles) per pixel that the camera can image with, on average, 50% of the original contrast

Number of line pairs in one dimension: the number of line pairs per whole width or height that the camera can image with, on average, 50% of the original contrast

Number of lines in one dimension: the number of lines per whole width or height that the camera can image with, on average, 50% of the original contrast


The number of cycles per pixel imaged with 50% of the original contrast is also called MTF50. That is, if we measure the contrast imaged at different line frequencies and build a function MTF(f), where f is frequency and MTF(f) is the contrast imaged, then the frequency at which this function reaches the value of 0.5 (or 50 percent) is called MTF50.

The MTF50 of the ideal camera we modeled, with a WxH pixel sensor, will be 0.5 line pairs per pixel, or W/2 line pairs per width, or H/2 line pairs per height, or W lines per width, or H lines per height.

Lens blur

How does the blurring introduced by a lens influence resolution and sharpness? Having contrast as part of the definition clearly accounts for lens blur. The blurrier the lens is, the less contrast it delivers, so the density of detail that can be imaged with 50% of the original contrast decreases.
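As an illustration of that dependence, lens blur can be modeled as a Gaussian of width σ pixels, whose contrast transfer at frequency f is exp(-2π²σ²f²); the wider the blur, the lower the frequency at which contrast drops to 50%. The Gaussian model is an assumption here, not a property of any specific lens:

```python
import math

def gaussian_mtf(f, sigma):
    """Contrast transfer of a Gaussian blur of standard deviation
    `sigma` pixels at spatial frequency `f` (cycles/pixel)."""
    return math.exp(-2 * math.pi ** 2 * sigma ** 2 * f ** 2)

def blur_mtf50(sigma, step=1e-4):
    """Frequency where the blur's contrast transfer drops to 50%."""
    f = 0.0
    while gaussian_mtf(f, sigma) > 0.5:
        f += step
    return f

for sigma in (0.3, 0.6, 1.0):
    print(sigma, round(blur_mtf50(sigma), 3))
# The blurrier the lens (larger sigma), the lower the 50%-contrast frequency.
```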

Back to megapixels

So, does it still make sense to express resolution in the term we are used to – megapixels? Yes. Given a real camera capable of imaging W lines per width and H lines per height (with no less than 50% contrast!), we can say that it has an “effective resolution” of W*H pixels – that is, the resolution that our ideal camera of W*H pixels would have. However, you should always remember that such effective resolution depends on lens blur, which varies at different apertures and focal lengths.
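Under the definitions above, an ideal sensor has MTF50 = 0.5 cycles/pixel, so a measured MTF50 translates into an effective pixel count by a simple ratio. A back-of-the-envelope sketch (not Imatest’s exact procedure, and the 0.36 cycles/pixel figure is a hypothetical example):

```python
def effective_megapixels(sensor_mp, mtf50_cycles_per_pixel):
    """Effective resolution from measured MTF50. An ideal sensor
    resolves 0.5 cycles/pixel at 50% contrast, so linear resolution
    scales by mtf50 / 0.5 and pixel count by its square."""
    return sensor_mp * (mtf50_cycles_per_pixel / 0.5) ** 2

# A hypothetical 6 MP camera whose system MTF50 is 0.36 cycles/pixel:
print(round(effective_megapixels(6, 0.36), 2))  # 3.11 effective MP
```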

A well-known tool for testing camera performance, Imatest, calculates the effective resolution along with MTF. For example, a 6 Megapixel camera, the Canon EOS 300D, equipped with an EF 50mm f/1.8 II lens, showed an effective resolution of 3.15 Megapixels (details).

The measured effective resolution of the 21 Megapixel Canon 1Ds Mk III with an EF 24-105mm f/4L lens at 58mm f/8 is 9.26 MP. The finest pattern that can be imaged by a 21 MP sensor can still be resolved by the Canon 1Ds Mk III with that lens, but only 14% of the original contrast is imaged.

The same details are captured, but in the right image the contrast is lower – the details are less well defined:

Same details captured with different resolution


Introduction to Resolution and Sharpness of digital images and digital cameras

As photography is intended to image objects, the main concept of cameras and images is resolution – the ability of an imaging system to resolve detail in the object being imaged.

Eye chart imaged with different resolutions

In the world of digital photography, resolution is often – and often mistakenly – represented as the number of pixels (or megapixels – millions of pixels), the atomic elements of a digital picture.

While the number of pixels in an image limits the detail the image can hold, it does not guarantee a certain level of detail. Enlarging the image increases the number of pixels but does not add detail.

The number of pixels in a camera likewise does not guarantee detail; it only limits the fineness of detail the camera can capture.

Moreover, resolution alone does not mean much. While it shows the fineness, or maximum density, of detail that a camera can image (resolve), it does not show how well that detail is resolved. That is why photographers talk not about resolution alone but about clarity or sharpness – the combination of how much detail can be resolved (the resolution) and how well it is resolved, in other words, how properly the transition of brightness at edges is captured (the acutance). If we did not care about how well edges are defined, we would not need 8-bit-per-pixel or 12-bit-per-pixel sensors – pure 1-bit sensors would be enough.

Test target images with different acutance

Test target pictured with different definition of edges

The detail a camera can capture is limited by the number of pixels in its sensor, but that is not the only limitation. Detail is also limited by factors such as:

  • Lens blur: the part of the scene that should be projected onto a single pixel of the sensor is partially projected onto neighboring pixels
  • Noise: random changes in brightness and color affect the accuracy of image detail. The noise-to-signal ratio grows as less light is captured by the sensor – either because the physical size of a pixel is small (and it gets smaller and smaller as more megapixels are packed onto a sensor of the same size!) or because the lens area and aperture are small, so less light reaches the sensor

Scene captured with different noise levels

That is why a professional camera with only 6 megapixels but a large sensor, equipped with a precise wide-aperture lens, gives more detail than a tiny 12-megapixel consumer camera.
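The pixel-size point can be quantified with photon shot noise: the number of photons a pixel collects follows Poisson statistics, so the signal-to-noise ratio is roughly the square root of the photon count – a pixel with 4x the area collects 4x the light for 2x the SNR. This is a simplified model that ignores read noise and other noise sources:

```python
import math

def shot_noise_snr(photons):
    """Photon shot noise is Poisson-distributed: noise = sqrt(signal),
    so SNR = signal / sqrt(signal) = sqrt(signal)."""
    return math.sqrt(photons)

small_pixel = 1000             # photons collected by a small pixel
large_pixel = 4 * small_pixel  # 4x the area collects 4x the light

print(round(shot_noise_snr(small_pixel), 1))  # 31.6
print(round(shot_noise_snr(large_pixel), 1))  # 63.2: twice the SNR
```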