Image processing plays an important role in determining the final image quality of an embedded camera module, yet the process is often taken for granted. Many don’t realize that the small steps taken to fine-tune image processing make a significant difference in the output, and many are unaware of the fundamental concepts that govern image processing and tuning. In this article, we break down some of those concepts and take a detailed look at the processes involved in getting the best out of a camera.
Understanding Bayer Pattern
No discussion of image processing is complete without understanding what a Bayer pattern is. In general, color camera sensors output raw data as Bayer 8/10/12 bits. This output format is called the Bayer pattern.
What is Bayer?
Let us dig a little deeper into Bayer patterns.
A pixel in a processed image will have a combination of all three colors – Red, Green, and Blue. But camera sensors have a Color Filter Array (CFA) with the primary colors arranged in patterns such as RGGB, BGGR, GRBG, or GBRG, so that each pixel records only one color. This is illustrated in the below image:
Owing to this, the raw output of the sensor is not a color image and hence needs to be converted. This process of converting the Bayer image to a fully processed image involves multiple blocks, which are handled by the ISP. The below comparison image visualizes the difference between a Bayer image and the processed image:
What is an ISP (Image Signal Processor)?
An Image Signal Processor (ISP) is used to process the raw images in camera modules. The ISP performs operations on the captured image, such as demosaicing, denoising, and the auto functions, that help deliver an enhanced image.
Architecture of an ISP and its functions
Given below is the block diagram of an image signal processor:
Let us now look at each of these functional blocks of an ISP in detail.
Color processing
In the human eye, color is a sensation caused by the activation of cone photoreceptors in the retina. When it comes to a camera, an abstract mathematical model known as a color space is used to characterize colors in terms of their intensity values.
Black level
Black level is defined as the pixel level produced under a no-light condition. The sensor output contains an offset and noise even in full darkness, which can be corrected using the black level adjustment in the ISP. The user-specified black level offset value is subtracted from all the pixels of the input image to produce an even black.
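To make this concrete, here is a minimal sketch (Python/NumPy) of black level subtraction. The offset of 64 and the 10-bit white level are illustrative assumptions, not values from any particular sensor; real offsets come from the sensor datasheet or calibration.

```python
import numpy as np

def subtract_black_level(raw, black_level=64, white_level=1023):
    """Subtract a fixed black-level offset from a RAW frame.

    black_level=64 and white_level=1023 (10-bit sensor) are illustrative;
    the real values come from the sensor datasheet or calibration.
    """
    raw = raw.astype(np.int32) - black_level            # remove the dark pedestal
    return np.clip(raw, 0, white_level - black_level)   # avoid negative pixel values
```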
Lens shading correction
Lens shading refers to the gradual reduction of an image’s brightness or saturation from the center to the corners. It is also known as lens vignetting. De-vignetting is the process of evening out the brightness of the image toward the corners/edges, using the center as a reference. The below comparison shows an image with lens vignetting before and after ISP calibration/correction:
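As an illustration, the sketch below brightens the corners with a simple radial gain model. A real ISP uses a calibrated, per-channel gain grid measured from a flat-field capture, so the quadratic falloff model and the strength value here are assumptions made only for the example.

```python
import numpy as np

def correct_vignetting(img, strength=0.4):
    """Brighten image corners using a simple radial gain model (8-bit input assumed)."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalized distance of each pixel from the optical center
    r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)
    gain = 1.0 + strength * r ** 2            # apply more gain towards the corners
    if img.ndim == 3:
        gain = gain[..., None]                # broadcast over color channels
    return np.clip(img * gain, 0, 255).astype(img.dtype)
```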
Bad pixel correction
A bad pixel (or hot pixel) refers to a pixel that contains no valid information and produces an anomalous value. Filtering of hot pixels is done by averaging the surrounding pixel values.
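A rough sketch of that idea: flag pixels that deviate strongly from the median of their 3x3 neighborhood and replace them with that median. The deviation threshold is an illustrative assumption; production ISPs typically combine a calibrated static defect map with dynamic detection.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_bad_pixels(raw, threshold=200):
    """Replace isolated hot/dead pixels with the median of their neighbors."""
    med = median_filter(raw.astype(np.float32), size=3)     # 3x3 neighborhood estimate
    bad = np.abs(raw.astype(np.float32) - med) > threshold   # pixels far from their neighbors
    out = raw.copy()
    out[bad] = med[bad].astype(raw.dtype)
    return out
```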
HDR (High Dynamic Range)
Dynamic range refers to the range between the darkest and brightest light levels captured from a particular scene. If a captured image contains a lot of bright areas along with many dark areas covered in shadow or dim light, the scene can be described as having a high dynamic range (HDR). Tone mapping is the technique used to capture and map HDR scenes (a simple tone-mapping sketch follows the list below). It has two types:
- ATM (Adaptive Tone Mapping) – adjusts the tone mapping curve according to the scene.
- LTM (Local Tone Mapping) – locally adjusts contrast in order to provide more details in dark and bright areas.
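As a minimal sketch of the idea, the global Reinhard operator L/(1+L) below compresses a linear HDR image into displayable range; an actual ATM/LTM block is considerably more sophisticated and adapts the curve per scene or per region.

```python
import numpy as np

def tone_map_global(hdr, exposure=1.0):
    """Global Reinhard tone mapping of a linear HDR image (exposure value is illustrative)."""
    scaled = hdr * exposure
    ldr = scaled / (1.0 + scaled)                  # compress highlights, preserve shadows
    return (np.clip(ldr, 0.0, 1.0) * 255).astype(np.uint8)
```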
The below image shows a comparison of a scene taken in normal and HDR modes. You can clearly see that certain areas of the scene are washed out when captured using a normal camera (or normal mode). On the other hand, the image on the right side, captured using an HDR camera (or HDR mode), preserves all the details of the scene.
Fig 4 – Normal Mode vs HDR Mode
Pixel processor
The pixel processor compensates for the imbalances of colors observed in the pixels. Denoising, demosaicing, white balance, and sharpening are the subprocesses included in the pixel processing stage of the ISP.
Denoising
Image noise can be defined as the appearance of undesired traces and variations in the brightness or color of the image. The amount of random noise increases with shorter exposure times and higher analog or digital gains. Effective noise removal is required to render visually pleasant images.
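As a simple illustration, an edge-preserving bilateral filter can serve as the denoiser; the parameter values below are arbitrary starting points, whereas an ISP would adapt the filter strength to the current analog/digital gain.

```python
import cv2

def denoise(img):
    """Edge-preserving noise reduction with a bilateral filter (8-bit image)."""
    # Diameter 5 and sigma values of 50 are illustrative, not tuned ISP settings.
    return cv2.bilateralFilter(img, d=5, sigmaColor=50, sigmaSpace=50)
```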
Demosaicing
In an image sensor, each pixel records only one of three colors (Red, Green, Blue) filtered by the Bayer filter mosaic. The ISP interpolates a set of complete red, green, and blue values for each point. Demosaicing is, basically, a part of the image processing pipeline used by ISPs to reconstruct a full-color image from the Bayer pattern images. There are multiple demosaicing algorithms used in general such as bilinear interpolation, nearest neighbour, bicubic, etc.
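The sketch below shows bilinear demosaicing of a Bayer frame using small interpolation kernels; it assumes an RGGB layout and leaves out the edge-aware refinements a production ISP would add to suppress color fringing.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer frame into an RGB image."""
    h, w = raw.shape
    raw = raw.astype(np.float32)

    # Location masks for RGGB: R at (even row, even col), B at (odd, odd), G elsewhere.
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(np.float32)
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(np.float32)
    g_mask = 1.0 - r_mask - b_mask

    # Kernels that average the nearest samples of each color.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)
```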
White Balance
White balance ensures that actual colors of the target scene are reproduced under varied lighting conditions. To know more about white balance and its calibration please read the blog post Auto white balance calibration for an embedded camera lens.
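For illustration, a simple gray-world auto white balance can be sketched as follows (assuming an 8-bit RGB image); production AWB also relies on color-temperature estimation and calibrated per-illuminant gains.

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world AWB: scale R and B so all channel means match the green mean."""
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)     # average R, G, B over the whole frame
    gains = means[1] / means                    # normalize each channel to green
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)
```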
Sharpening
Camera sensors, lenses, and a few other ISP operations always blur an image to some degree, so the edges need to be corrected to improve the quality of the image. This is achieved by the sharpening feature of the ISP.
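A common way to illustrate sharpening is the unsharp mask: subtract a blurred copy from the original and add the scaled difference back. The amount and sigma values below are illustrative.

```python
import cv2

def unsharp_mask(img, amount=0.8, sigma=1.5):
    """Sharpen by boosting the high-frequency component (original minus Gaussian blur)."""
    blurred = cv2.GaussianBlur(img, ksize=(0, 0), sigmaX=sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)
```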
Gamma correction
Gamma correction compensates for the nonlinear relationship between a pixel’s numerical value and its actual luminance, so that the encoded image matches how displays and human vision respond to brightness.
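A minimal sketch of gamma encoding with a lookup table, assuming an 8-bit linear input and an illustrative display gamma of 2.2:

```python
import numpy as np

def apply_gamma(img, gamma=2.2):
    """Encode a linear 8-bit image with a gamma curve via a 256-entry LUT."""
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[img]   # img must be uint8
```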
Re-Sampler and crop
The multiple resolutions supported by a camera are made possible by the re-sampler/crop block, which crops the actual sensor image to the required aspect ratio and then downscales or upscales it to the requested resolution.
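A rough software sketch of the same idea: center-crop to the target aspect ratio, then resize; the 1280x720 output size is only an example.

```python
import cv2

def crop_and_scale(frame, out_w=1280, out_h=720):
    """Center-crop the sensor frame to the target aspect ratio, then scale it."""
    h, w = frame.shape[:2]
    target_ar = out_w / out_h
    if w / h > target_ar:                       # frame too wide: crop the sides
        new_w = int(h * target_ar)
        x0 = (w - new_w) // 2
        frame = frame[:, x0:x0 + new_w]
    else:                                       # frame too tall: crop top and bottom
        new_h = int(w / target_ar)
        y0 = (h - new_h) // 2
        frame = frame[y0:y0 + new_h, :]
    return cv2.resize(frame, (out_w, out_h), interpolation=cv2.INTER_AREA)
```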
Color conversion
Color conversion uses a configurable color conversion matrix to output the image in the desired format. By default, YUYV, YCbCr, and a few other output formats are supported.
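As an example, the sketch below applies the BT.601 full-range matrix to convert RGB to YCbCr; the matrix actually programmed into the ISP is configurable per output format.

```python
import numpy as np

# BT.601 full-range RGB -> YCbCr conversion matrix
RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                      [-0.168736, -0.331264,  0.5     ],
                      [ 0.5,      -0.418688, -0.081312]], dtype=np.float32)

def rgb_to_ycbcr(rgb):
    """Apply a 3x3 color conversion matrix to produce YCbCr output (8-bit input)."""
    ycbcr = rgb.astype(np.float32) @ RGB2YCBCR.T
    ycbcr[..., 1:] += 128.0                     # offset chroma channels to mid-range
    return np.clip(ycbcr, 0, 255).astype(np.uint8)
```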
TintE™ – e-con Systems’ FPGA-based Image Signal Processor
e-con Systems has 20+ years of experience in designing, developing, and manufacturing OEM cameras, with specialized expertise in fine-tuning ISPs from multiple vendors. We recently developed our own ISP called TintE™, which provides exceptional image enhancement.
TintE™ is an FPGA-based ISP that enhances camera image quality with a complete, turnkey ISP pipeline. It features optimized, customizable blocks such as debayering, auto white balance (AWB), auto exposure (AE), and gamma correction, among others, to provide a world-class imaging pipeline.
TintE™ can be implemented across various FPGA platforms, from cost-sensitive solutions to high-performance SoCs, while maintaining excellent image quality. With its user-friendly design and high optimization, TintE™ delivers outstanding imaging performance.
We also offer extensive customization to meet the specific image processing needs of different applications, ensuring optimal results across a variety of use cases.
Currently, two of our cameras, See3CAM_50CUG and See3CAM_CU200, are equipped with this world-class TintE™ ISP.
Watch e-con Systems’ TintE demo
If you are looking for the right camera for your application, the Camera Selector is where you can find our full portfolio of products.
If you need help integrating cameras into your products, please write to camerasolutions@e-consystems.com.
Vinoth Rajagopalan is an embedded vision expert with 15+ years of experience in product engineering management, R&D, and technical consultations. He has been responsible for many success stories in e-con Systems – from pre-sales and product conceptualization to launch and support. Having started his career as a software engineer, he currently leads a world-class team to handle major product development initiatives.