A quick guide to selecting the best-fit processor for your robotic vision system

Undoubtedly, the camera solution can make or break a robotic vision system, since capturing high-quality, sharp images is one of its primary tasks. But just as important is the processor that handles the images coming from these cameras. Selecting a processor best suited to your robotic application’s requirements is easier said than done. It requires a thorough understanding of what a processor can do, and how it can be used to boost the performance of your robotic vision systems.

From NVIDIA to Intel and Google, some of the largest players in the world offer state-of-the-art processors. Let’s look at the key features of these processors and how the most popular ones add incredible value to robotic vision systems.

Different types of processors used in robotic vision systems

NVIDIA® Jetson™ processor family

NVIDIA®’s Jetson family of processors is a powerful solution for developing embedded AI applications, including robotics. NVIDIA® Jetson™ is a series of embedded computing boards designed to accelerate time to market. These processors span from affordable options such as the Jetson Nano to the more advanced AGX Xavier, with computing performance ranging from 0.472 TFLOPS to 32 TOPS. They can be used for everything from relatively low-end robotic applications to extremely complex edge AI and deep learning-based applications.
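To give a feel for how a Jetson board is typically used in a vision pipeline, here is a minimal detection-loop sketch in Python, assuming NVIDIA’s open-source jetson-inference library is installed; the "ssd-mobilenet-v2" model name and the "csi://0" camera URI are illustrative assumptions rather than requirements.

```python
# Minimal object-detection loop on a Jetson board using NVIDIA's
# open-source jetson-inference library (GPU-accelerated via TensorRT).
# The model name and camera URI below are illustrative assumptions.
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2", threshold=0.5)   # pre-trained detection model
camera = videoSource("csi://0")                      # MIPI CSI camera on the Jetson
display = videoOutput("display://0")                 # render results locally

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                                  # capture timeout
        continue
    detections = net.Detect(img)                     # inference runs on the Jetson GPU
    display.Render(img)
    display.SetStatus(f"Detected {len(detections)} objects | {net.GetNetworkFPS():.0f} FPS")
```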

Intel® Atom processor family & FPGAs

The low-power Atom series of processors from Intel provides a good balance of performance and power for low-end compute applications. Intel FPGAs are recommended when high speed and very low latency I/O are required. They can be used for applications that require a customized architecture and faster acceleration of business logic. The Intel Myriad X from the Movidius™ family is an edge AI accelerator that enables deep learning inference at the edge with its 1 TOPS of AI compute power.
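As a rough illustration of how the Myriad X is typically targeted from a host application, the sketch below uses Intel’s OpenVINO toolkit with the MYRIAD device plugin; the model file name and input shape are placeholder assumptions.

```python
# Running a pre-converted OpenVINO IR model on the Myriad X VPU.
# Requires the OpenVINO toolkit with the MYRIAD plugin installed;
# "model.xml" and the input shape are placeholder assumptions.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                        # IR model from the Model Optimizer
compiled = core.compile_model(model, device_name="MYRIAD")  # offload inference to the VPU

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy NCHW frame
result = compiled([input_tensor])                           # results keyed by output node
output = result[compiled.output(0)]
print("Output shape:", output.shape)
```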

Google Coral Edge Tensor Processing Unit (TPU)

The Edge TPU from Google is a custom ASIC designed for edge AI applications. It is designed to complement a host CPU for accelerating AI workloads while the main CPU performs the business logic. The Google Coral Edge TPU accelerator is available in USB and PCIe variants to enable integration with a variety of host processors. Its AI compute performance of 4 TOPS with a power budget of 2 Watts allows it to be used in many edge-based robotic vision systems.
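The sketch below illustrates one common way to offload inference to a Coral Edge TPU from Python, using the TensorFlow Lite runtime with the Edge TPU delegate; the model file name is a placeholder, and the model must have been compiled for the Edge TPU beforehand.

```python
# Classifying a frame with a Coral Edge TPU using the TensorFlow Lite
# runtime and the Edge TPU delegate. The model file name is a placeholder;
# Edge TPU models must be compiled with the edgetpu_compiler beforehand.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",                          # placeholder model name
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],  # routes supported ops to the TPU
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy uint8 input matching the model's expected shape (e.g. 1x224x224x3).
frame = np.zeros(input_details["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()                                                   # inference runs on the Edge TPU
scores = interpreter.get_tensor(output_details["index"])
print("Top class:", int(np.argmax(scores)))
```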

NXP i.MX series of application processors

Based on the ARM architecture, the NXP i.MX series of processors can be used to create power-efficient robotic vision systems. Because they can interface with multiple MIPI CSI-2 cameras as well as perform edge computing, they are a good choice for building intelligent solutions for robots and industrial use cases. The i.MX7 series of application processors is great for building low-end solutions, while the i.MX8 family of application processors is much more powerful and can be used for far more advanced visual inspection solutions.
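To illustrate the camera-interfacing side, here is a minimal frame-capture sketch assuming the MIPI CSI-2 camera is exposed as a V4L2 device on the i.MX board; the /dev/video0 node and the resolution are assumptions that depend on the board’s device tree and camera driver.

```python
# Grabbing frames from a MIPI CSI-2 camera exposed as a V4L2 device on an
# i.MX board. The device node and resolution are assumptions; the actual
# node depends on the board's device tree and camera driver.
import cv2

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)   # assumed camera node
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

for _ in range(100):                                   # grab a short burst of frames
    ok, frame = cap.read()
    if not ok:
        break
    # Hand the BGR frame to the vision pipeline here (inspection, detection, etc.)
    print("Captured frame:", frame.shape)

cap.release()
```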

Xilinx™ MPSoCs & FPGAs

With a wide portfolio of FPGAs, SoCs, and MPSoCs, Xilinx caters to both low-end and high-end industrial requirements. The heterogeneous embedded processing capabilities of the Zynq UltraScale+ MPSoCs allow for complex use cases. They can be used to run business logic on the ARM cores as well as the FPGA fabric in a single die. This provides a lot of flexibility and scalability to customize vision-guided robots. The recently launched Kria K26 SOMs are powerful enough to run deep learning workloads at the edge within a low power budget, and they allow for configurable AI performance using the Vitis AI toolkit.
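The following is a simplified sketch of how a deep learning workload can be dispatched to the DPU with the Vitis AI Runtime (VART) Python bindings, assuming a model that has already been compiled to an .xmodel file; "model.xmodel" is a placeholder and pre/post-processing are omitted.

```python
# Dispatching a compiled .xmodel to the DPU on a Zynq UltraScale+ / Kria K26
# using the Vitis AI Runtime (VART) Python bindings. "model.xmodel" is a
# placeholder; pre- and post-processing are omitted for brevity.
import numpy as np
import xir
import vart

graph = xir.Graph.deserialize("model.xmodel")
# Pick the subgraph that the Vitis AI compiler mapped onto the DPU.
dpu_subgraph = next(
    s for s in graph.get_root_subgraph().toposort_child_subgraph()
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)
runner = vart.Runner.create_runner(dpu_subgraph, "run")

in_dims = tuple(runner.get_input_tensors()[0].dims)     # e.g. (1, 224, 224, 3)
out_dims = tuple(runner.get_output_tensors()[0].dims)

input_data = [np.zeros(in_dims, dtype=np.int8)]         # dummy quantized input
output_data = [np.zeros(out_dims, dtype=np.int8)]
job_id = runner.execute_async(input_data, output_data)  # run on the DPU
runner.wait(job_id)
print("Output shape:", output_data[0].shape)
```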

It’s evident that there’s no dearth of world-class choices when it comes to processors for robotic systems, given the credibility that each of these technology providers brings to the table. Picking the right processor, however, depends entirely on the type of industrial use cases you are looking to transform through robotic vision. Hence, it is crucial to know what the available options are so that you can bring out the best in your robotic vision systems.

If you are looking to learn more about how to smoothly integrate cameras into your robotic systems, here are a few resources that might interest you:

e-con Systems’ camera solutions for robots

e-con Systems™, a pioneer in the embedded vision space, has built numerous camera solutions for robotic systems. Following is a list of some of them:

If you’re looking for a helping hand in integrating cameras smoothly into your robots, or have any queries about picking the right robotic vision system components, drop a note to sales@e-consystems.com. Our experts will be happy to assist you.
