In its simplest terms, latency refers to the delay between information leaving one point and reaching another. In video streaming, this translates into the time interval between capturing an image and displaying it on the end-user’s screen. Streaming latency is measured in units of time, and the higher the latency, the more fractured the video streaming experience becomes due to disruptive delays.
For example, many people face this issue on video conferencing platforms, where the conversation is disrupted by high latency. You may have even seen it happen during live news broadcasts, where it seems as though the anchor and the guest are having two separate conversations because of these delays.
When it comes to embedded cameras, high latency can lead to failure of the whole system, especially in autonomous vehicles that must make decisions based on the captured image and video data. In this edition of Technology deep dive, let’s go deep into the world of low latency camera streaming, the challenges you need to overcome, and the major embedded vision applications that require it.
What is low latency camera streaming – and why is it important?
The degree of latency has become a competitive differentiator for embedded vision applications. Low latency camera streaming ensures that there are only negligible lags, if any at all, while capturing, sharing, and receiving the imaging information. While there is no standard for what constitutes the perfect latency figure, there are accepted best practices.
If the streaming is time-sensitive, high latency could render the embedded vision application ineffective. For instance, consider real-time patient monitoring devices that rely on low latency camera streaming. Any delay in sharing the visual information from the bedside patient monitoring camera to the device used by doctors, clinicians, or nurses could potentially lead to a life-threatening situation.
By reducing the streaming latency as much as possible, you can create near real-time video experiences that can make your embedded vision applications a powerful tool in many industries, which we shall look at a bit later in this article.
How does low latency camera streaming work?
Video streaming is a multi-layered process that starts with the camera capturing and processing the live video. Next, the stream is sent to the encoder for compression or transcoding. Finally, it is transmitted to end-users, whose devices decode the imaging information and display it. Delays can occur during any of these steps, which is why you should be aware of the factors that may get in the way of achieving low latency camera streaming.
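To get a feel for where these delays come from, here is a minimal sketch (not e-con Systems code) that times each stage of a local capture, encode, and decode/display loop with OpenCV. The camera index 0 and JPEG encoding are assumptions made purely for illustration; a real pipeline would typically use a hardware H.264/H.265 encoder.

```python
# Rough per-stage latency measurement for a capture -> encode -> decode/display loop.
# Assumes a camera at index 0 and uses JPEG as a stand-in for a video codec.
import time
import cv2

cap = cv2.VideoCapture(0)                      # open the default camera
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)            # keep the driver's frame queue short

while True:
    t0 = time.monotonic()
    ok, frame = cap.read()                     # capture stage
    if not ok:
        break
    t1 = time.monotonic()

    ok, packet = cv2.imencode(".jpg", frame)   # encode stage
    t2 = time.monotonic()

    decoded = cv2.imdecode(packet, cv2.IMREAD_COLOR)  # decode on the "receiving" side
    cv2.imshow("preview", decoded)             # display stage
    t3 = time.monotonic()

    print(f"capture {1000*(t1-t0):.1f} ms | "
          f"encode {1000*(t2-t1):.1f} ms | "
          f"decode+display {1000*(t3-t2):.1f} ms")

    if cv2.waitKey(1) == 27:                   # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Even this simplified loop makes it clear that end-to-end latency is the sum of several stages, and network transmission (not modeled here) adds further delay on top.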
5 factors that affect low latency camera streaming
- Bandwidth: It’s a no-brainer since this determines how much data can be transferred per second. Hence, if your network has a high bandwidth, you’re one step closer to low latency streaming!
- Connectivity: Depending on how you transmit the data (optical fiber, WAN, Wi-Fi, etc.), the speed at which your imaging information can be shared and received will vary. If you’re using a GMSL camera, a single coaxial cable can carry the video over roughly 15 to 20 meters from the host processor while still keeping latency low.
- Distance: This is a geographical factor: the farther the imaging data has to travel, the higher the latency of the streaming video.
- Encoding: To achieve low latency streaming, you must align the encoder with your video streaming protocol to avoid delays; otherwise, you will end up compromising on the speed of transmission (see the encoder sketch after this list).
- Video format: If your video frames or files are very large, sharing them over the internet will inevitably cause high latency. Optimized file sizes are highly recommended to reduce latency, but aggressive compression can hurt video quality unless you strike the right balance.
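The encoder sketch below ties several of these factors together. It is a hedged example, assuming a Linux host with GStreamer’s Python bindings and a V4L2 camera at /dev/video0; the receiver address 192.168.1.10:5000, the 2000 kbit/s bitrate, and the GOP length are placeholders you would tune for your own bandwidth and quality targets.

```python
# Low-latency-oriented encode-and-stream pipeline: zerolatency tune, short GOP,
# capped bitrate, and RTP over UDP. All addresses and numbers are illustrative.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "x264enc tune=zerolatency speed-preset=ultrafast bitrate=2000 key-int-max=30 ! "
    "rtph264pay config-interval=1 pt=96 ! "
    "udpsink host=192.168.1.10 port=5000 sync=false"
)

pipeline.set_state(Gst.State.PLAYING)          # start capturing and streaming
try:
    GLib.MainLoop().run()                      # stream until interrupted
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)         # clean shutdown
```

The key design choice here is trading some compression efficiency (ultrafast preset, short keyframe interval) for lower buffering and faster delivery, which is exactly the balance the encoding and video format factors describe.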
Top embedded vision applications that depend on low latency camera streaming
Low latency streaming can bring out the best in your embedded vision applications by transmitting timely information and reducing user experience gaps. Anyone who has participated in online bidding or streamed video games already knows its immense benefits, since an extra second of delay can be the point of no return.
Here, let’s look at the embedded vision applications where achieving low latency can be a defining moment.
Low latency streaming in video conferencing
An effective video conferencing camera comes equipped with low latency capabilities. It’s more critical than ever during these pandemic times, with remote workforce collaboration becoming the new normal. Remote learning platforms are also widely accepted as business as usual, with online education gaining unprecedented momentum. High latency can lead to prolonged pauses and broken transfer of information, decreasing interactivity and even causing user frustration. It has been documented that the latency of video conferencing devices should stay around the 200-millisecond mark, i.e., as close to real time as possible.
Low latency streaming in quality inspection and monitoring
Advanced embedded camera systems are deployed in manufacturing premises to manage inventory, drive quality assurance, etc., with inbuilt monitoring features. For example, if the assembly line is producing a huge volume of products on a daily basis, then the camera in the monitoring system should come with low latency. It helps identify defects or other quality issues by quickly processing and sharing the imaging information.
Low latency streaming in autonomous mobile robots and vehicles
Low latency streaming is critical in autonomous vehicles. This is especially true of autonomous farming cameras, as the industry has realized the value of enabling precision agricultural practices. From irrigation and crop growth tracking to pest control, low latency adds more power to these embedded vision applications. The same holds for autonomous mobile robots: robot cameras need to perform tasks like obstacle detection, object recognition, and surround view, and for these functions low latency streaming is very critical.
Learn how e-con Systems helped a leading Autonomous Mobile Robot manufacturer enhance warehouse automation by integrating cameras to enable accurate object detection and error-free barcode reading.
Low latency streaming in remote patient monitoring
In IoT-enabled patient monitoring devices, low latency streaming can be useful – as previously mentioned – since it determines the speed at which a medically trained person can respond to the needs of a patient.
The applications mentioned here are some of the most popular ones that require low latency streaming, but there are many more camera-based applications where latency is critical, such as fleet management, street lighting in smart cities, telemedicine, and intelligent traffic systems.
e-con Systems offers high-resolution cameras with integrated low latency camera streaming
Backed by nearly two decades of experience, e-con Systems has designed several off-the-shelf and customizable cameras that offer low latency capabilities to your embedded vision applications. One of e-con’s differentiators is its ability to integrate its cameras with a wide variety of ARM platforms using different interfaces while still ensuring low latency and superior performance. The majority of e-con Systems’ MIPI cameras, USB cameras, and GMSL cameras come with the low latency feature.
If you’re interested in integrating low latency embedded cameras into your products, kindly write to us at camerasolutions@e-consystems.com. Visit our Camera Selector page to see our entire camera portfolio.
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems and brings more than 15 years of experience in the embedded vision space. He has deep knowledge of USB cameras, embedded vision cameras, vision algorithms, and FPGAs, and has built 50+ camera solutions spanning domains such as medical, industrial, agriculture, retail, biometrics, and more. He also has expertise in device driver development and BSP development. Currently, Prabu’s focus is on building smart camera solutions that power new-age AI-based applications.