Time-of-Flight (ToF) cameras are powerful embedded vision solutions that provide real-time depth measurement, predominantly for applications that require autonomous and guided navigation. As you may know, ToF sensors use active light illumination: depth is measured from the time it takes the emitted light to return from the target object. (To learn more about how the technology works, please visit What is a Time-of-Flight sensor? What are the key components of a Time-of-Flight camera?) To achieve this, the entire scene is illuminated, and the difference between the emitted light and its reflection is measured. However, these cameras have a stumbling block to overcome in depth measurement: multipath interference (or multipath reflection). In this blog, let's look at what multipath interference means and how you can overcome it to ensure high performance of your Time-of-Flight camera.
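As a quick refresher on the math behind this: the sensor measures the round-trip time Δt of the emitted light, and depth follows as d = c × Δt / 2. Here is a minimal sketch in Python (illustrative values only, not tied to any specific sensor):

```python
# Depth from round-trip time: d = c * dt / 2
# (illustrative values; not tied to any specific sensor)
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(dt_seconds):
    """Convert a measured round-trip time into depth in metres."""
    return C * dt_seconds / 2.0

# Light reflected by a target about 1.5 m away returns after ~10 ns:
print(depth_from_round_trip(10e-9))  # -> ~1.499 m
```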
What is multipath interference? Why is it important in Time-of-Flight cameras?
ToF sensors operate on the assumption that each pixel sees light from a single optical path. In a practical scenario, however, multiple illumination paths from the light source tend to get mapped onto the same pixel. This is known as multipath interference, and it can be extremely detrimental to ToF cameras, as it can lead to drastic measurement inaccuracies. In a nutshell, the light reflects off the target object as well as other nearby objects before reaching the camera sensor. This creates multiple paths of illumination, which corrupts the depth estimation.
The below figure illustrates how multipath interference occurs:
Figure 1 – Multipath reflection
The four common scenarios in which multipath interference can occur are:
- Specular interreflections in the scene – this typically happens when a shiny, mirror-like surface in the scene redirects the light beam before it travels back to the sensor.
- Translucent objects – this is caused by a translucent object or surface in the scene, which lets part of the light pass through and scatter before it returns to the sensor.
- Multi-surface reflections (or diffuse interreflections) – these are caused when there are multiple obstructions in the light's path. This is more likely to happen when objects are placed in multiple 2D or 3D planes, or when the texture of the target object is uneven.
- Ghosting or lens flare – ghosting occurs when light from a natural or artificial source falls directly on the lens and bounces off different lens elements, or even off the sensor itself.
Let us now try to understand the concept using a practical example. Consider a ToF camera placed on a table as shown in the figure below:
Figure 2 – Multipath reflection caused by a camera placed on a table (image is from the point of view of the camera)
When the ToF camera is placed on the table as shown in the above figure, reflections from the table surface into the light's path corrupt the depth map. As you can see, even a far object like the wall in the scene appears red (red indicates that an object is nearer relative to the other objects in the scene). Because several optical paths are superimposed at a single pixel in these scenarios, the depth measurement is significantly affected. Industry experts consider this one of the leading causes of error in even the most advanced ToF sensors.
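To see why superimposed paths corrupt the measurement, consider a continuous-wave ToF pixel, which recovers depth from the phase of the modulated return. Each optical path contributes a complex phasor, and the pixel effectively reports the phase of their sum, not of the direct path alone. The short Python/NumPy sketch below uses a hypothetical two-path scene (a target at 2.0 m plus a weaker bounce via the table at 2.8 m) to illustrate the resulting bias:

```python
import numpy as np

# Continuous-wave ToF recovers depth from the phase of the modulated
# return: d = c * phase / (4 * pi * f_mod).
C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # 20 MHz modulation frequency (illustrative)

def path_phasor(depth_m, amplitude):
    """One optical path as a complex phasor at the modulation frequency."""
    phase = 4 * np.pi * F_MOD * depth_m / C
    return amplitude * np.exp(1j * phase)

def measured_depth(returns):
    """Depth the pixel reports: the phase of the *sum* of all returns."""
    phase = np.angle(sum(returns)) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

direct = path_phasor(2.0, 1.0)   # true target (e.g. the wall) at 2.0 m
bounce = path_phasor(2.8, 0.5)   # longer path bounced off the table

print(f"single path : {measured_depth([direct]):.3f} m")          # ~2.000 m
print(f"with bounce : {measured_depth([direct, bounce]):.3f} m")  # biased
```

Running this, the single-path case reports the true 2.0 m, while the two-path case lands between the two path lengths (about 2.26 m with these numbers), even though no object actually sits at that depth.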
3 ways to minimize multipath interference
Multipath interference is a challenging problem to solve since it involves a combination of many light paths, and it is close to impossible to eliminate completely. However, it can be minimized to ensure maximum accuracy of a Time-of-Flight camera, largely by setting up the target scene in such a way that interference is kept low. Following are the steps you can take while setting up the camera:
- Before you finalize the position, make sure that any bright objects in the vicinity are removed. If that is unavoidable, for example with a bright fixed surface, you can use a suitable tripod to place the camera at an elevated level. Have a look at the image below to understand the difference this can make:
Figure 3 – Multipath interference minimized with camera placed on a tripod (image is from the point of view of the camera)
In the above setup, the camera is placed on a tripod. In the depth map, we can now see different color shades indicating different depths in the same scene.
- Ensure the camera is not placed near any translucent objects or in a corner. This helps avoid deviations in the light's path on its way back to the sensor.
- Choose a lens specifically designed for ToF sensors, with coatings and filters that help reduce the multipath interference caused by lens flare (a software-side filtering sketch follows this list).
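In addition to these setup steps, multipath-suspect pixels can often be suppressed in post-processing. Many ToF cameras expose a per-pixel amplitude (or confidence) channel alongside depth, and pixels whose return amplitude is unusually low or saturated are frequent multipath suspects that can simply be invalidated. Below is a minimal sketch assuming NumPy arrays for the depth and amplitude frames; the threshold values are hypothetical and would need tuning for your camera:

```python
import numpy as np

def mask_suspect_pixels(depth, amplitude, amp_min=30.0, amp_max=4000.0):
    """Invalidate depth pixels whose return amplitude falls outside a
    trusted band; such pixels are frequent multipath suspects.
    Thresholds are illustrative and camera-specific."""
    filtered = depth.astype(np.float32).copy()
    suspect = (amplitude < amp_min) | (amplitude > amp_max)
    filtered[suspect] = np.nan  # mark as invalid rather than guessing
    return filtered

# Usage with frames from a camera SDK (hypothetical calls):
# depth = tof.get_depth_frame(); amplitude = tof.get_amplitude_frame()
# clean_depth = mask_suspect_pixels(depth, amplitude)
```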
Is e-con Systems developing a Time-of-Flight camera?
If you are interested in knowing more about the advantages of having RGB and depth data in a single ToF camera, please visit the below blog:
https://www.e-consystems.com/blog/camera/technology/advantages-of-having-rgb-and-depth-data-in-a-single-time-of-flight-camera/
For further reading about how an autonomous mobile robot uses time of flight technology, please visit here:
https://www.e-consystems.com/blog/camera/applications/how-does-an-autonomous-mobile-robot-use-time-of-flight-technology/
To learn more about the features, benefits, and target applications of the camera, please visit the product page. We hope this blog helped clarify how to minimize multipath interference in Time-of-Flight cameras. As always, if you need any help picking and integrating the right cameras into your camera-enabled device, please write to us at camerasolutions@e-consystems.com. Also, do check out the Camera Selector if you wish to browse e-con Systems' complete portfolio of cameras.
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems, with more than 15 years of rich experience in the embedded vision space. He brings to the table deep knowledge of USB cameras, embedded vision cameras, vision algorithms, and FPGAs. He has built 50+ camera solutions spanning domains such as medical, industrial, agriculture, retail, biometrics, and more. He also has expertise in device driver development and BSP development. Currently, Prabu's focus is on building smart camera solutions that power new-age AI-based applications.