How Face-Based Auto Exposure Optimizes Image Quality

In imaging systems, traditional exposure techniques often struggle with complex scenes involving multiple faces or dynamic lighting conditions. This led to the emergence of Face-Based Auto Exposure (AE) technology, which focuses on enhancing the exposure of faces within the frame.

This ensures that human subjects are clearly visible regardless of the surrounding environment.

In this blog, you’ll discover how Face-Based Auto Exposure works, its features, and the benefits it offers for modern imaging systems.

What is Face Detection?

The face detection function identifies faces through cameras and optimizes focus, color, and exposure. It includes the following features:

Static face detection

It detects faces when they are stationary. Face detection is initially disabled; once it is enabled, the system will detect a person sitting still in the preview.

Moving face detection

It detects faces even when they are in motion. If a person moves continuously within the preview, the system will track and detect the face.

Face enumeration

It identifies and detects new faces that appear in the preview. When a new person enters the preview, the system will detect and recognize their face. Faces are detected in a frontal position and tracked in profile as long as they remain within the frame. Tracking continues when the camera is rotated left or right.

Once a face is “locked,” brief profile changes do not disrupt tracking. The face detection algorithm supports a 60-degree range of rotation in both horizontal and vertical directions. Faces within the field of view can be detected in 33 milliseconds (within one frame at 30 fps). The system can track up to 10 faces simultaneously, ensuring consistent focus and exposure in dynamic environments.

Understanding Face-Based Auto Exposure

Face-Based Auto Exposure is a technique that optimizes the exposure settings of a camera based on identified face regions within the frame. Unlike conventional AE methods that modify exposure uniformly across the full scene, this method focuses on boosting the visibility and clarity of human subjects. The process involves three steps:

  • Detection: The system first identifies faces and marks them as regions of interest (ROIs) using advanced detection algorithms.
  • Calculation: Based on the lighting conditions of these ROIs, the system determines the optimal exposure settings to ensure faces are well-lit without causing overexposure or underexposure of the background.
  • Application: The calculated exposure settings are then applied dynamically, allowing the system to adjust in real time as new faces enter the frame or as lighting conditions change.
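The detect–calculate–apply loop above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual API: the mean-luminance metering, the mid-gray target of 118, and the function names are all assumptions.

```python
# Illustrative sketch of face-based AE metering; the frame is a plain
# 2D list of brightness values and the ROI format (x, y, w, h) is assumed.

def mean_luminance(frame, roi):
    """Average pixel brightness (0-255) inside a face ROI."""
    x, y, w, h = roi
    region = [frame[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(region) / len(region)

def compute_exposure_gain(frame, face_rois, target=118):
    """Step 2: choose a gain that moves the average face brightness
    toward a mid-gray target, instead of metering the whole scene."""
    if not face_rois:
        return 1.0  # fall back to scene-wide AE when no faces are present
    avg = sum(mean_luminance(frame, roi) for roi in face_rois) / len(face_rois)
    return target / max(avg, 1.0)

# Tiny synthetic 8x8 "frame" with a dark face region in the top-left corner
frame = [[40 if (r < 4 and c < 4) else 200 for c in range(8)] for r in range(8)]
gain = compute_exposure_gain(frame, [(0, 0, 4, 4)])
print(round(gain, 2))  # gain > 1, so the underexposed face is brightened
```

In a real pipeline this gain would be applied per frame (step 3), with the face list refreshed as subjects enter or leave the scene.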

How is Exposure Calculated When Multiple Faces Are Detected?

Once multiple faces are detected in a single scene, the system adjusts exposure to ensure that all faces are properly illuminated. One of the following methods can be selected to control how the AE value is calculated:

  1. Average of all faces: Exposure is set based on a simple average of the exposure values required for each detected face. This method treats all faces equally, providing uniform illumination regardless of their size or position in the frame.
  2. Intermediate approach: This method strikes a balance between a plain average and a size-weighted average. It slightly prioritizes larger faces while still considering all detected faces. This approach is useful when a mix of close and distant faces is present, ensuring none are too bright or too dark.
  3. Size-weighted average: Here, the exposure is calculated by assigning more weight to larger faces in the frame. Faces that occupy more space influence the overall exposure settings, ensuring that primary subjects are well-lit even if smaller faces are present.

All three options provide flexible and adaptive exposure control, enabling product developers to select the method that best suits the scene’s requirements.
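The three weighting schemes can be sketched as follows. The face representation (required exposure value plus area) and the 50/50 blend factor for the intermediate mode are illustrative assumptions, not the camera's documented internals.

```python
# Sketch of the three multi-face AE weighting modes.
# Each face is a tuple: (required_exposure_value, area_in_pixels).

def face_weighted_ae(faces, mode="average", blend=0.5):
    """Combine per-face exposure values into a single AE target."""
    evs = [ev for ev, _ in faces]
    plain = sum(evs) / len(evs)                             # equal weight per face
    weighted = sum(ev * a for ev, a in faces) / sum(a for _, a in faces)
    if mode == "average":
        return plain
    if mode == "size_weighted":
        return weighted                                     # larger faces dominate
    # "intermediate": blend between the two extremes
    return (1 - blend) * plain + blend * weighted

faces = [(10.0, 400), (14.0, 100)]  # one large nearby face, one small distant face
print(face_weighted_ae(faces, "average"))        # 12.0
print(face_weighted_ae(faces, "size_weighted"))  # 10.8
print(face_weighted_ae(faces, "intermediate"))   # 11.4
```

Note how the size-weighted result leans toward the large face's requirement, while the intermediate mode lands between the two extremes.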

Features of Face-Based Auto Exposure

Territory for face detection

This empowers users to define specific areas within the frame for face detection so that only faces within these designated regions are recognized. This improves the accuracy of face detection and minimizes false positives by focusing detection efforts on targeted areas and ignoring irrelevant regions.

Restricting face detection to a specified territory also enhances the precision of face-based adjustments, such as exposure and focus, by concentrating on the most relevant parts of the frame.

Face age

The face age setting controls how long a detected face remains in the region of interest (ROI) after the detection confidence falls below a certain threshold, referred to as the “keep” threshold. It prevents temporary drops in confidence from immediately removing a face from the detection list, ensuring smoother and more stable face tracking.

The system can recover from brief interruptions or occlusions without losing track of detected faces.

Face confidence

Face confidence refers to the reliability of face detection, determined by how certain the system is about a detected face. It is governed by two parameters:

  1. Adding a new face: Users can specify a confidence threshold that determines when a new face should be recognized and added to the detection list. Only faces that meet or exceed this threshold will be detected, thereby reducing false positives.
  2. Keeping an existing face: A separate threshold can be set for retaining already detected faces. If the confidence for a face falls below this threshold, the system will not immediately stop tracking it. This grace period enables stable tracking by accommodating temporary changes in lighting or pose without prematurely losing track of faces.

Therefore, users can fine-tune the balance between detection accuracy and responsiveness and reliably track faces even in challenging conditions.
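The interaction between the two confidence thresholds and the face-age countdown can be sketched as a small hysteresis loop. The threshold values, the track structure, and the frame-based age counter are illustrative assumptions; a real implementation would also handle faces missing entirely from a frame.

```python
# Sketch of confidence hysteresis with a face-age countdown: a face is
# added only above add_thr, kept while above keep_thr, and retained for
# up to max_age frames after its confidence drops below the "keep" level.

def update_tracks(tracks, detections, add_thr=0.8, keep_thr=0.5, max_age=5):
    """tracks: {face_id: age_counter}; detections: {face_id: confidence}."""
    updated = {}
    for face_id, conf in detections.items():
        if face_id in tracks:
            if conf >= keep_thr:
                updated[face_id] = 0                     # confident again: reset age
            elif tracks[face_id] + 1 < max_age:
                updated[face_id] = tracks[face_id] + 1   # keep the face, but age it
        elif conf >= add_thr:
            updated[face_id] = 0                         # new faces must clear the higher bar
    return updated

tracks = update_tracks({}, {"A": 0.9, "B": 0.6})  # only "A" clears add_thr
tracks = update_tracks(tracks, {"A": 0.4})        # "A" dips below keep_thr but is retained
print(sorted(tracks))  # ['A']
```

Because the "add" bar is higher than the "keep" bar, a momentary dip in confidence ages a face rather than dropping it, which is exactly the smoothing behavior the face age setting describes.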

Face-based zoom

Face-based zoom automatically adjusts the zoom level to focus on detected faces within the frame. The system determines the zoom level based on a specified zoom factor, ensuring that faces are clearly visible and centered. By dynamically adjusting the zoom, the system ensures that faces remain prominent and detailed.

[Before and after images demonstrating face-based zoom]

Key Parameters of Face-Based Auto Exposure

Zoom factor for face

This parameter defines the level of zoom applied to detected faces. Higher zoom factors result in a closer view, making faces appear larger and more detailed in the frame. Adjusting the zoom factor equips users to control how prominently faces are displayed.

Zoom margin

Zoom margin manages the balance between how often the zoom level is updated and how accurately the face is centered in the frame. A higher zoom margin leads to more frequent updates, making the system more responsive to face movement but potentially less precise in centering. In contrast, a lower zoom margin prioritizes accurate centering at the cost of fewer updates.
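One plausible way the zoom factor and zoom margin could interact is sketched below: the crop window is the face size scaled by the zoom factor, and the window is only re-centered when the face drifts outside a central band whose width shrinks as the margin grows. All names and the 1-D pixel geometry here are assumptions for illustration.

```python
# 1-D sketch of face-based zoom: window sizing via zoom factor, and an
# update rule where a larger margin triggers re-centering more often.

def zoom_window(face_center, face_size, zoom_factor, frame_size):
    """Crop window side = face_size * zoom_factor, clamped to the frame."""
    side = min(face_size * zoom_factor, frame_size)
    half = side / 2
    cx = min(max(face_center, half), frame_size - half)  # keep window inside frame
    return (cx - half, cx + half)

def needs_update(face_center, window, margin):
    """Re-center only when the face leaves the window's central band; a
    larger margin narrows the band (more responsive, less precise)."""
    lo, hi = window
    center = (lo + hi) / 2
    band = (hi - lo) / 2 * (1 - margin)
    return abs(face_center - center) > band

win = zoom_window(face_center=100, face_size=40, zoom_factor=3, frame_size=400)
print(win)                          # (40.0, 160.0)
print(needs_update(125, win, 0.5))  # False: still inside the central band
print(needs_update(140, win, 0.5))  # True: drifted past the margin
```

With margin = 0.9 the band would be very narrow, so nearly every small movement triggers an update, matching the "more frequent updates, less precise centering" trade-off described above.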

Face cycle time

Face cycle time refers to the duration required for the system to complete a full scan of the image and detect faces. Shorter face cycle times enable faster detection and more real-time adjustments.

Face count

This parameter indicates the number of faces currently detected within the frame. Monitoring face count allows the system to dynamically adjust exposure, focus, and zoom settings based on the number of subjects.

Overlay rectangle

The overlay rectangle parameter enables a visual rectangle to be displayed around detected faces. It helps users confirm which faces are being tracked and adjusted for exposure, focus, and zoom.
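The parameters above could be grouped into a single configuration record, as in the hypothetical sketch below. The field names and defaults are assumptions for illustration (the 33 ms cycle time and 10-face limit come from the detection figures stated earlier), not a vendor API.

```python
# Hypothetical configuration record grouping the face-based AE parameters.
from dataclasses import dataclass

@dataclass
class FaceAEConfig:
    zoom_factor: float = 2.0        # how tightly detected faces are framed
    zoom_margin: float = 0.5        # update frequency vs. centering precision
    face_cycle_ms: int = 33         # full-frame face scan time (one frame at 30 fps)
    max_faces: int = 10             # simultaneous tracking limit
    overlay_rectangle: bool = True  # draw boxes around tracked faces

cfg = FaceAEConfig(zoom_factor=3.0)
print(cfg.face_cycle_ms, cfg.max_faces)  # 33 10
```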

e-con Systems Offers Cameras with Face-Based Auto Exposure

e-con Systems has been designing, developing, and manufacturing OEM cameras since 2003. Here’s a list of our cameras that come equipped with Face-Based Auto Exposure support:

Use our Camera Selector to browse our complete portfolio.

If you need help selecting and integrating the right camera solution into your embedded vision system, please write to camerasolutions@e-consystems.com.

Related posts

Understanding the MTF Graph and Its Key Parameters

How e-con Systems’ M12 VCM Module Equips Autofocus Cameras with Multiple Lenses

How See3CAM_37CUGM’s Self-Trigger Mode Ensures Seamless Automated Capture