Image quality (IQ) determines how accurately a camera captures the real world. In embedded vision applications, from industrial automation to medical diagnostics, the clarity and accuracy of an image directly impact the performance of the full system.
As a result, image quality is a key consideration when selecting a camera for any application. Different components of the camera, such as the lens, sensor, and firmware, contribute to it, and many image parameters influence it, making accurate measurement a complex task. Properly assessing a camera’s image quality requires a solid understanding of these contributing factors.
Understanding these parameters is crucial for producing high-quality images that truly represent reality. In this blog, you’ll learn about the key parameters that define image quality, including signal-to-noise ratio, dynamic range, and quantum efficiency. You’ll also learn how these parameters are objectively evaluated for superior imaging performance across applications.
Understanding Image Quality
Image quality is an assessment of an image’s fidelity to the original scene. Multiple factors, including color reproduction, distortion, sharpness, and noise levels, influence it. Different lighting conditions affect how images appear, making validation under various conditions a critical need.
Let us see how various parameters contribute to the overall quality of an image.
What Are the Key Image Quality Parameters?
Color Accuracy
Color accuracy describes how faithfully the camera reproduces colors. When light falls on an object, for instance, a red apple, it reflects only the red wavelengths and absorbs the rest of the light spectrum. The reflected red light is then detected by the eye and interpreted by the brain, allowing us to perceive the apple as red. Similarly, all colors are perceived through the reflection of specific light wavelengths. White reflects all visible light, while black absorbs it entirely, reflecting none; hence, black is often considered the absence of color.
Color accuracy is validated using a color checker chart.
Figure 1: Color Checker Chart
The color checker chart is an industry-standard target consisting of 24 color patches, including six neutral (grayscale) patches as well as primary and secondary colors, for a comprehensive evaluation.
The gray color patches in the last row are particularly useful for gamma and white balance evaluations.
Color accuracy can also be validated using the ColorChecker Digital SG chart, which provides 140 color patches, including:
- 24 patches from the original ColorChecker
- 17-step grayscale
- 14 unique skin tone colors
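Quantitatively, color accuracy is often reported as a color difference (ΔE) between each chart patch’s measured value and its reference value in CIELAB space. Here is a minimal sketch using the simple CIE76 formula; the patch Lab values below are hypothetical, not official chart references:

```python
import math

def delta_e_cie76(lab_ref, lab_meas):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(lab_ref, lab_meas)))

# Hypothetical reference vs. measured Lab values for one chart patch
reference = (37.99, 13.56, 14.06)
measured = (39.10, 12.80, 15.20)

patch_delta_e = delta_e_cie76(reference, measured)
```

A ΔE below roughly 2 is generally considered barely perceptible; per-patch values across the whole chart are typically averaged into a single score.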
If you’re looking for a deeper understanding of color reproduction and how it’s measured, check out our detailed blog: What is Color Accuracy? How to Measure Color Accuracy?
White Balance (WB)
White balance determines how accurately white is retained across different lighting conditions. Depending on the time of day, a light source such as the sun can cast an orange tone in the morning or a bluish tone at midday. Effective white balance eliminates these tints, ensuring that white remains consistently white.
Auto White Balance (AWB) functionality enables cameras to automatically adjust to different lighting conditions through tuning.
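To illustrate the core idea behind AWB, here is a minimal gray-world sketch, one common AWB heuristic that assumes the scene averages to neutral gray. The pixel data is hypothetical, and production pipelines use far more sophisticated, tuned algorithms:

```python
def gray_world_awb(pixels):
    """Gray-world auto white balance: scale each channel so its mean
    matches the overall gray mean, removing a uniform color cast."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    gray = (avg_r + avg_g + avg_b) / 3.0
    gains = (gray / avg_r, gray / avg_g, gray / avg_b)
    # Apply per-channel gains, clipping to the 8-bit range
    return [tuple(min(255.0, c * g) for c, g in zip(p, gains)) for p in pixels]

# A warm (orange-tinted) test image: red elevated, blue suppressed
warm = [(200, 150, 100), (180, 140, 90), (220, 160, 110)]
balanced = gray_world_awb(warm)
```

After correction, the per-channel means are equal, i.e., the orange cast has been neutralized.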
Figure 3: Comparison of Image Output – Before (Right) and After (Left) Auto White Balance Tuning
📘 Related read: Auto White Balance Calibration for an Embedded Camera Lens – Learn how AWB tuning works in embedded systems and why proper calibration is essential for image quality.
Lens Distortion
Distortion is the bending of straight lines in an image, appearing as follows:
- Barrel distortion (negative value) – lines bend outward
- Pincushion distortion (positive value) – lines bend inward
- Mustache distortion (wave distortion) – lines bend outward near the center and inward toward the edges; in effect, a combination of barrel and pincushion distortion
- Keystone distortion – occurs when the camera’s sensor plane is not parallel to the plane of the object being captured, causing a trapezoidal effect in the image
This parameter is evaluated using dot pattern charts and can be corrected through calibration.
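To make the barrel/pincushion sign convention concrete, here is a minimal sketch of the radial polynomial distortion model. The coefficients are hypothetical; real calibration (e.g., with OpenCV) estimates these along with tangential terms from the dot pattern:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Map an undistorted normalized point (x, y) to its distorted
    position using the radial polynomial model:
        x_d = x * (1 + k1*r^2 + k2*r^4)
    k1 < 0 gives barrel distortion, k1 > 0 gives pincushion."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Barrel distortion (negative k1) pulls off-center points inward
xd, yd = apply_radial_distortion(0.5, 0.5, k1=-0.2)
```

Correction inverts this mapping: once the coefficients are known from calibration, each pixel is resampled back to its undistorted position.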
Chromatic Aberration
Chromatic aberration happens due to different colors of light bending at slightly different angles when passing through a lens. This causes colors to focus at different points on the sensor, resulting in colored fringes.
The two types of chromatic aberration are:
- Lateral chromatic aberration: Different wavelengths fall on different points on the image plane
- Longitudinal chromatic aberration: Different wavelengths fall on different image planes
Lateral aberration is more easily visible in images, while longitudinal aberration requires analyzing image sequences captured at varying distances. This parameter is evaluated using dot pattern charts.
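Lateral chromatic aberration can be quantified as the sub-pixel offset between the same edge’s position in different color channels. A minimal sketch, with hypothetical 1-D edge profiles extracted from a dot or edge target:

```python
def edge_position(profile):
    """Sub-pixel edge location: where a rising 1-D intensity profile
    crosses the midpoint between its min and max (linear interpolation)."""
    half = (min(profile) + max(profile)) / 2.0
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - half) * (b - half) <= 0 and a != b:
            return i + (half - a) / (b - a)
    return None

# Hypothetical red/blue profiles across the same edge; the blue edge
# sits about half a pixel away, indicating lateral chromatic aberration
red = [10, 10, 10, 120, 230, 230]
blue = [10, 10, 65, 175, 230, 230]
shift = edge_position(blue) - edge_position(red)
```

The farther from the image center the edge is measured, the larger this shift typically grows, which is why lateral aberration is most visible near the corners.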
Chromatic aberration is one of several types of lens aberrations that can affect image quality. To explore all types and learn how to reduce them through lens design and image signal processing, refer to our blog: What Causes Lens Aberrations? An Insight on Types of Lens Aberrations and How to Minimize Them
Lens Shading/Vignetting
Lens shading refers to the decrease in the image brightness from the center to the edge of the image. The brightness variation affects the overall image quality. This can be corrected through lens calibration and tuning.
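As an illustration, natural vignetting is often modeled as a cos⁴ falloff from the optical center, and a correction gain map is simply the inverse of that falloff. This is a minimal sketch under a hypothetical lens mapping, not a production shading-correction algorithm:

```python
import math

def vignetting_gain(x, y, cx, cy, max_r, strength=1.0):
    """Correction gain for cos^4 ("natural") vignetting: brightness
    falls off as cos^4(theta) toward the corners, so the correction
    multiplies each pixel by the inverse of that falloff."""
    r = math.hypot(x - cx, y - cy) / max_r   # normalized radius, 0..1
    theta = math.atan(r * strength)          # hypothetical lens mapping
    falloff = math.cos(theta) ** 4
    return 1.0 / falloff

# Gain is 1.0 at the optical center and grows toward the corners
center_gain = vignetting_gain(320, 240, 320, 240, 400)
corner_gain = vignetting_gain(0, 0, 320, 240, 400)
```

In practice, the gain map is measured from flat-field captures during calibration rather than derived from a model, since real lenses rarely follow pure cos⁴ behavior.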
To further understand the causes, types, and practical approaches to correct it, read our detailed blog: What is Lens Vignetting in Embedded Cameras?
Dynamic Range (DR)
Dynamic range represents the ratio between the brightest and darkest tones a camera can capture in a single scene, that is, its ability to retain detail in both highlights and shadows. It’s measured in decibels (dB).
DR is evaluated using specialized charts:
- ITU-HDR transmissive test chart (with 36 density steps from 0.10 to 8.22)
- Contrast Resolution Chart (with 20 density steps from 0.15 to 4.9)
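In dB terms, dynamic range is 20·log10 of the ratio between the largest usable signal and the noise floor. A minimal sketch with hypothetical sensor figures:

```python
import math

def dynamic_range_db(full_well_signal, noise_floor):
    """Dynamic range in dB: 20*log10(max usable signal / noise floor)."""
    return 20.0 * math.log10(full_well_signal / noise_floor)

# Hypothetical sensor: 10,000 e- full-well capacity, 3 e- noise floor
dr = dynamic_range_db(10000, 3)
```

By this convention, every 20 dB corresponds to a 10× ratio between the brightest and darkest recoverable signals.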
Low Light Performance
This parameter assesses how well a camera performs in limited lighting conditions. It’s evaluated using the eSFR ISO Chart, which contains:
- Wedges and slanted edges for MTF50 calculation
- 20-patch OECF for measuring noise, SNR, and DR
- Color patches for checking color reproduction
Signal-to-Noise Ratio (SNR)
SNR refers to the ratio between the signal level and the overall noise level. Higher SNR values indicate better image quality with less visible noise. It is evaluated using the eSFR ISO chart.
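For a uniform gray patch, SNR is typically computed as the patch’s mean signal divided by its standard deviation, expressed in dB. A minimal sketch with hypothetical pixel values:

```python
import math

def snr_db(patch):
    """SNR of a uniform gray patch: mean signal over the standard
    deviation of the pixel values, expressed in decibels."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((v - mean) ** 2 for v in patch) / n
    return 20.0 * math.log10(mean / math.sqrt(var))

# Hypothetical pixel values sampled from one flat gray patch
patch = [118, 122, 120, 121, 119, 120, 122, 118]
patch_snr = snr_db(patch)
```

Repeating this across the chart’s gray patches yields SNR as a function of signal level, which is how tools report the full SNR curve.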
To explore the fundamentals of SNR, why it matters, and how it impacts embedded camera performance, read our detailed blog: What is Signal-to-Noise Ratio (SNR)? Why is SNR Important in Embedded Cameras?
Sharpness
Sharpness determines how clearly edges are defined in an image. It is characterized by how many pixels are needed to transition from a dark area to a bright area. Metrics such as the Modulation Transfer Function (MTF) at different contrast levels (MTF10, MTF20, MTF50) are used to validate sharpness.
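The pixel-transition idea can be illustrated with a simple 10–90% edge rise-distance measurement. This is only a rough proxy; real MTF analysis uses the slanted-edge SFR method per ISO 12233. The profile values below are hypothetical:

```python
def rise_distance(profile, lo=0.1, hi=0.9):
    """Edge rise distance: pixels needed to go from 10% to 90% of a
    rising dark-to-bright transition (smaller = sharper edge)."""
    dark, bright = min(profile), max(profile)
    span = bright - dark

    def crossing(level):
        # Sub-pixel location where the profile crosses the given level
        target = dark + level * span
        for i in range(len(profile) - 1):
            a, b = profile[i], profile[i + 1]
            if a <= target <= b:
                return i + (target - a) / (b - a)
        return None

    return crossing(hi) - crossing(lo)

# A fairly sharp edge: the transition spans roughly two pixels
edge = [10, 10, 30, 120, 210, 230, 230]
rise = rise_distance(edge)
```

A soft lens or defocus stretches this transition over more pixels, which shows up directly as lower MTF values.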
Flare
Flare occurs due to light scattering inside the camera, reducing overall contrast and affecting dynamic range. It is evaluated using ISO 18844 and IEEE P2020 flare charts.
Figure 8: Lens flare reducing contrast and causing bright spots (highlighted in red)
e-con Systems’ High Image Quality Cameras for Embedded Vision Solutions
Since 2003, e-con Systems has specialized in designing, developing, and manufacturing high-performance cameras for embedded vision applications. Our cameras offer advanced features such as HDR, low-light optimization, resolutions up to 20MP, NIR sensitivity, global shutter technology, and more.
Explore our Camera Selector Page to find the ideal solution for your needs.
For expert guidance on selecting the right camera solution, reach out to us at camerasolutions@e-consystems.com.
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems, and comes with a rich experience of more than 15 years in the embedded vision space. He brings to the table a deep knowledge in USB cameras, embedded vision cameras, vision algorithms and FPGAs. He has built 50+ camera solutions spanning various domains such as medical, industrial, agriculture, retail, biometrics, and more. He also comes with expertise in device driver development and BSP development. Currently, Prabu’s focus is to build smart camera solutions that power new age AI based applications.