Next Generation of Human Eye Simulator Prevents Eye Tracker Failures by Eliminating Unwanted Reflections with New IR Filter

Almalence introduces the next generation of its Human Eye Simulator, which now looks exactly like a human eye to eye trackers by completely eliminating unwanted pupil reflections in the IR spectrum.

A closer look at how the Almalence Human Eye Simulator has evolved in the way it appears to eye trackers in the IR spectrum:

First generation:

In this IR capture from an eye-tracking camera, you can see nicely defined glints. These are produced by the eye-tracking system's IR projectors and are reflected off the synthetic cornea.

You can, however, also see some internal structure within the eye pupil. That structure is in fact the camera lens inside the eye simulator. Its presence in the captured image can distract the eye-tracking system. A real eye has no internal structures visible under IR illumination.

Second generation:

In the next generation of the Simulator, Almalence addressed the above issue by adding a conventional IR-cut filter.

Such filters are commonly sandwiched between the lens and the image sensor in digital cameras to block IR light, to which digital sensors are sensitive. The IR component of light is present under some lighting conditions and, if not filtered out, results in incorrect colors in the captured image.

Adding the IR filter to the Simulator worked very well for concealing the internal lens structure behind the pupil. Another issue remained though: under specific relative orientations of the simulated eye to the eye-tracking system, the surface of the filter was now producing a reflected image of the eye-tracking system itself.

In the image above, captured by an eye-tracker camera, you can see a reflection of that camera and its lens as well as the white background behind it.

Third generation:

In the third generation of the Simulator we implemented a custom-designed, non-flat, non-reflecting IR filter, achieving two major improvements over the previous design:

  1. No unwanted reflections inside the pupil, regardless of the simulator orientation.
  2. The non-flat surface improves the overall MTF of the system and eliminates the reflections of light from the IR LEDs that appear at certain orientations and would blind the eye-tracking system.

With the design improvements made in the third-generation Eye Simulator, Almalence achieved robust operation and eliminated eye-tracking failures across all possible eye orientations and gaze angles, making the Simulator the ultimate solution for image capture, optical profiling, and quality measurement of VR/AR head-mounted displays equipped with eye tracking.

Almalence SuperResolution Zoom selected by Qualcomm Technologies and ASUS for the Smartphone for Snapdragon Insiders for its superior image quality

AUSTIN, Texas, Aug. 18, 2021 /PRNewswire/ — On July 8th, Qualcomm Technologies, Inc. announced the Smartphone for Snapdragon Insiders, which was designed and manufactured by Asus. Almalence SuperResolution Zoom was selected to give the flagship device the best possible camera image quality under zoom conditions.

Almalence SuperResolution Zoom is a multi-shot computational camera solution that reduces image noise while increasing resolution. The use of SuperResolution Zoom allows a camera to achieve better performance than would be possible through optics alone.
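The core multi-shot idea can be illustrated with a toy sketch (our own simplified illustration, not Almalence's actual algorithm): several low-resolution frames, each sampled at a slightly different sub-pixel offset, are aligned and fused onto a finer grid, recovering detail that no single frame contains.

```python
# Toy sketch of multi-shot super resolution (shift-and-add fusion).
# Each low-res frame samples the scene at a different sub-pixel offset;
# fusing them onto a fine grid recovers detail lost in any single frame.

def capture_frame(scene, factor, offset):
    """Sample every `factor`-th value of the scene, starting at `offset`
    (models a low-res sensor shifted by a sub-pixel amount)."""
    return scene[offset::factor]

def fuse(frames, factor, offsets):
    """Shift-and-add fusion: place each frame's samples back onto the
    high-resolution grid according to its known offset, then average."""
    n = len(frames[0]) * factor
    high_res = [0.0] * n
    counts = [0] * n
    for frame, off in zip(frames, offsets):
        for i, value in enumerate(frame):
            high_res[i * factor + off] += value
            counts[i * factor + off] += 1
    return [v / max(c, 1) for v, c in zip(high_res, counts)]

# A "scene" with alternating fine detail that a single 2x-downsampled
# frame cannot represent at all.
scene = [0, 9, 0, 9, 0, 9, 0, 9]
factor = 2
offsets = [0, 1]
frames = [capture_frame(scene, factor, o) for o in offsets]
restored = fuse(frames, factor, offsets)
print(frames)    # each frame alone sees only a flat signal
print(restored)  # together they recover the alternating detail
```

Real implementations must additionally estimate the sub-pixel shifts from the frames themselves, handle motion and deghosting, and operate on 2D Bayer data; the averaging step is also what reduces noise while resolution increases.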

Almalence has a successful history of integration and continuous improvement, supporting OEMs that utilize Snapdragon® mobile platforms to improve their products’ cameras. The effectiveness of Almalence SuperResolution Zoom has contributed to customers achieving some of the highest recent DxOMark scores in the industry, with better image quality than designs using longer focal lengths or higher-resolution, bigger sensors. The algorithm is also fast and power efficient, optimized to run entirely on the Qualcomm® Hexagon™ 780 processor rather than the CPU or GPU. This reduces the time and power required to process the highest-resolution images.

“Almalence SuperResolution Zoom technology is very complementary to Snapdragon platforms supporting the Hexagon processor. Qualcomm Technologies’ processing architecture is well-suited to the algorithm workload and fits within customer processing time requirements,” explained Judd Heape, vice president of product management, Qualcomm Technologies, Inc.

As a member of the Qualcomm® Platform Solutions Ecosystem, Almalence has collaborated closely with Qualcomm Technologies with the common goal of delivering customers the best possible imaging experience on Snapdragon hardware. Almalence has continued porting and optimizing the algorithm for the Hexagon processor and the camera system pipeline to maximize power/performance efficiency. Dan Sakols, Almalence VP of Business Development, added, “We have kept ahead of our customers’ image quality and SuperResolution Zoom performance demands with each new Snapdragon mobile platform release. We leverage all of the Hexagon processor capabilities.”

Qualcomm, Snapdragon, and Hexagon are trademarks or registered trademarks of Qualcomm Incorporated. Snapdragon and Qualcomm Hexagon are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm Platform Solutions Ecosystem is a program of Qualcomm Technologies, Inc. and/or its subsidiaries.

Almalence, a Texas corporation, creates computational imaging technologies that allow cameras, machines, and humans to see more. Almalence products are designed to improve camera image quality and AR/VR display quality, and to help machine vision algorithms predict more accurately. For more information, contact Dan Sakols, VP Business Development.

Near-eye Display Optic Deficiencies and Ways to Overcome Them

Achieving high picture quality, optical fidelity, and natural visual experience is a challenge in near-eye display design.


  • Why conventional approaches do not work for near-eye display optical design;
  • What specific methods exist, and why they are still fundamentally limited;
  • How computational optical correction techniques make it possible to overcome those limits.

Read a comprehensive article by Almalence CTO Dmitry Shmunk, originally presented at SPIE AR/VR/MR 2021 conference:

An ideal imitation of the human eye enables the precise measurement of near-eye and head-mounted display quality

The all-new 2021 version of the Almalence Human Eye Simulator. Optically clear. Eye Tracking ready.

To assess the quality of head-mounted displays, it is necessary to capture images that exactly match what a human eye would perceive, so the capturing device must accurately replicate the human eye’s optical properties. If it does not, drastic irregularities can appear: a mismatch of entrance pupil diameter, for instance, would produce quite different blurs and aberrations, or even visible Fresnel rings that are not apparent to the human eye. Once a capturing device matching the optical properties of the human eye is in place, however, the real challenge begins: the device has to be recognized as an “eye” by eye trackers. Otherwise, if the HMD uses eye-position-dependent rendering techniques such as foveated rendering or dynamic aberration correction, which have recently become standard for high-quality near-eye displays, a wrong picture would be displayed in the first place, leaving no chance of capturing a correct one.

Almalence, a pioneer in designing eye-imitating cameras, has now begun to roll out an all-new version of its eye simulator, better than ever and ideally suited for near-eye display picture capture and quality measurement. It features made-to-order, optically clear corneas that match the shape of a real human eye so closely as to be indistinguishable to eye trackers, unlike off-the-shelf parts, which commonly deviate from the proper shape. A clear aperture for up to a 120° FOV enables capture of the entire field of view in one shot, without compromising the contrast and MTF of the visible picture.

The ideal profile of both visible and IR light absorption and reflection is painstakingly implemented to make the simulator’s iris look exactly like a natural iris to an eye tracker. An additional IR-cut filter prevents unwanted reflections from the camera lens, which could otherwise distort eye tracker readings.

An auto-focus capability avoids having to manually adjust the focal point when moving the “eye” inside head-mounted displays that exhibit significant field curvature.

The platform encapsulates multiple capturing camera designs, including a 100° field-of-view camera that enables the user to capture the entire visible FOV in one shot, a feature also quite useful for geometric distortion measurements. Specially designed narrow 78° and 34° FOV cameras are also included, engineered for high-precision optical measurements, including apparent resolution, chromatic aberrations, and more.

A monochromatic camera can also optionally be used with the eye simulator to resolve ambiguity in color channel mixing between the HMD display and the CFA filters inside the camera.

Almalence has also developed powerful software for processing and transforming captured images so that they can readily be used for correct measurements of geometry, MTF, channel crossing, and other quality characteristics with industry-standard tools such as Imatest. Together with a 6-DOF robotic arm and its controlling software, all of the above features combine into a complete, easy-to-use tool for head-mounted display picture quality assessment and the profiling of geometry and aberration corrections.

Almalence SuperResolution now supports MediaTek APU

We are happy to announce that our SuperResolution has been fully ported to the MediaTek APU, an AI processor powering high-end MediaTek smartphone chipsets.

With the addition of the APU version, Almalence SR now supports the high-end chipset DSPs of both major chipset makers, Qualcomm and MediaTek.

Running SuperResolution image processing on a specialized DSP brings the following advantages:

  • Much faster processing, up to real-time at video frame rates
  • Some 10-20 times lower power consumption
  • CPU offload, no blocking of UI and other applications
  • More sophisticated processing within a shorter time

In 2021, Almalence SuperResolution will be used to achieve unprecedented camera zoom quality on top smartphones powered by the newest Qualcomm Snapdragon 888 and MediaTek Dimensity chipsets.

Microsoft vs Almalence SuperResolution Zoom

While the number one thing that differentiates Microsoft’s new productivity device, the Surface Duo, is its new form factor, we are mostly interested in its camera performance, namely its zoom capability. With a single camera module, the Surface Duo is said to achieve enhanced zoom quality by using a super-resolution zoom algorithm.

We were eager to compare Microsoft’s vs. Almalence’s Super Resolution Zoom performance; below are the comparison results at 7x zoom (the maximum zoom factor available on the Duo).

Siemens Star test chart, 7x zoom, Left: Microsoft Zoom, Right: Almalence SuperResolution Zoom

We particularly like the Spilled Coins (aka Dead Leaves) chart, as it is nearly impossible to fake high resolution on it using sharpening. It also exposes detail loss due to noise reduction extremely well, highlighting the advantage of algorithms that suppress noise without loss of detail.

DxOMark chart, Spilled Coins, 7x zoom. Left: Microsoft Duo built-in, Right: Almalence SuperResolution

The text readability test usually works well too:

Left: Microsoft Surface Duo 7x zoom; Right: Almalence Super Resolution Zoom

To summarize: while using a single camera module makes sense for the Surface Duo’s niche, Microsoft could have used a more powerful super-resolution technique to mitigate the device’s zoom quality limitation.

Huawei P40 Pro Super Resolution Zoom: How Good It Is and How Much Better It Could Be

The Huawei P40 Pro looks like the winner in the smartphone zoom race, achieving a DxOMark Zoom score of 115, an all-time high. However, its outstanding camera hardware, utilizing a 125mm f/3.4 telephoto module, is not the only key to its success. Achieving the best results would not be possible without a computational super resolution zoom technology. We did some testing to check how good it is and whether better results could be achieved using the leading Super Resolution technology from Almalence.

We will start with a side-by-side comparison and then discuss some interesting behaviors of Huawei’s SR that we found during the testing. For the testing, we captured:

  • several JPEG images with the built-in camera app at 10x; these came out quite different from one another, so we used the best one for comparison;
  • a series of RAW images with the 5x telephoto camera module, which were then processed with 2x Almalence Super Resolution.

The pictures were captured indoors, in good office lighting (~700 lux). Note that, as we used RAW images for processing, the colors in the Almalence SR output are somewhat off.

Comparing the ability to resolve fine details shows a dramatic improvement when using Almalence Super Resolution Zoom:

Left: Huawei P40 Pro built-in 10x zoom; Right: Almalence Super Resolution Zoom

A strange effect appears in the next example: most of the fine text is “washed out” in the P40 Pro image. This can be caused by extreme noise filtering or by input frame misalignment/deghosting.
Also note the highlighted character. It looks like Huawei’s algorithm employs a kind of neural network that tried to “guess” the object but in this case made a wrong guess. (We will show more examples of that NN’s work below.)

Left: Huawei P40 Pro built-in 10x zoom; Right: Almalence Super Resolution Zoom

In testing with a wedge chart, Almalence SR Zoom increases the effective resolution by roughly 20–25% more than Huawei’s built-in algorithm:

Left: Huawei P40 Pro built-in 10x zoom; Right: Almalence Super Resolution Zoom

Getting back to the P40 Pro’s [supposedly] neural network, here is an interesting example. First of all, the NN did an absolutely fantastic job resolving the hair (look at areas 1 and 2). This seems beyond the normal capabilities of super resolution algorithms, which convinces us a neural network was involved. Exploring the image further, however, we can see that in some areas (e.g. area 3) the picture looks very detailed yet unnatural (and yes, different from the original), so the NN made a visually pleasing but wrong guess. In area 4, the algorithm “resolved” the eye in a way that distorted the eyelid and iris geometry, making the two eyes look in different directions; it also guessed the bottom eyelashes so that they appear to grow from the eyeball rather than the eyelid, which looks rather unnatural.

Left: Huawei P40 Pro built-in 10x zoom; Right: Almalence Super Resolution Zoom.
Huawei’s result looks more detailed; however, in some areas those details are unnatural and do not reflect the original object.

To summarize: while the Huawei P40 Pro is clearly the winner in telephoto camera module hardware design, its computational zoom algorithm is not yet doing the best possible job. While it has some advantages over Almalence’s Super Resolution Zoom in resolving certain kinds of objects, it could be better in terms of overall resolution capability. It would be really interesting to see what those algorithms could do combined; that would likely make an all-time best digital zoom technology.

A Zoom Technology Missing from iPhone 11 Pro

Despite having a telephoto camera module, the iPhone 11 Pro’s zoom is still far behind the top performers, which use Super Resolution Zoom.

Zoom has recently become one of the most important features of smartphone cameras, with leading OEMs advertising devices that achieve high picture quality at sometimes extreme zoom levels.

Like every high-end smartphone, the iPhone 11 Pro uses a dedicated telephoto camera module to achieve the maximum zoom quality. It appears, however, that simply utilizing a telephoto module, even one of great design and quality, as is undoubtedly the case with an Apple product, is not enough to achieve top zoom performance. According to the DxOMark benchmark, the iPhone 11 Pro achieves a Zoom score of 74 while, for example, the Xiaomi Mi 10 Pro hits 110, a drastic 1.5x difference!

To go beyond the camera hardware’s capabilities, top zoom performers utilize a computational imaging technique, Super Resolution Zoom. As its name suggests, it uses a super resolution technique to increase the resolution of images that suffer from a lack of pixels when the target zoom level exceeds the optical zoom of the telephoto module.

For example: zooming 4x with a 12 MP 2x telephoto module uses only 1/4 of its sensor, or just 3 Megapixels.
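That arithmetic is easy to verify; the small helper below (a hypothetical illustration for this article, not part of any camera API) computes the usable pixel count when zooming beyond the optics is done by cropping:

```python
# Digital zoom beyond the telephoto module's optical zoom crops the sensor,
# so the usable pixel count falls with the SQUARE of the extra zoom factor
# (the crop shrinks both width and height).

def effective_megapixels(sensor_mp, optical_zoom, target_zoom):
    """Megapixels actually used when cropping a `sensor_mp` sensor with
    `optical_zoom` optics to reach `target_zoom`."""
    digital_factor = target_zoom / optical_zoom  # extra zoom done by cropping
    return sensor_mp / (digital_factor ** 2)     # sensor area used

print(effective_megapixels(12, 2, 4))  # 3.0 MP, matching the example above
```

Super resolution fusion of multiple such cropped frames is what compensates for this quadratic loss of pixels.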

Besides improving resolution, Super Resolution Zoom also recovers the SNR lost due to the small aperture of a telephoto module (the higher the optical zoom level, the smaller the aperture).

We ran a few tests to check how Almalence Super Resolution Zoom, the most advanced digital zoom technology, would improve the iPhone 11 Pro’s zooming capabilities. Check out a couple of examples below:

iPhone 11 Pro, 4x zoom. Left: iPhone as is, Right: with Almalence Super Resolution Zoom

iPhone 11 Pro, 4x zoom. Left: iPhone as is, Right: with Almalence Super Resolution Zoom

The pictures speak for themselves. Apple can definitely achieve better zoom picture clarity by utilizing a computational super resolution technology.