e-con’s new edge AI camera for automating pre-analytic tasks in in-vitro diagnostics

In-vitro diagnostic (IVD) tests are an essential part of medical diagnosis and treatment. They involve testing and analyzing samples taken from the human body, such as blood, mucus, or saliva, to guide the treatment of existing conditions as well as to predict the risk of future medical abnormalities. 'In-vitro' means 'in glass', indicating that the tests are carried out using glass equipment such as test tubes and slides. The tests done to diagnose COVID-19 are good examples of IVD analyses.

Cameras have played a key role in automating and expediting the process of in-vitro diagnostics. From digital microscopes to spectrophotometers, there are plenty of camera-enabled IVD devices used in clinical and medical testing.

In addition to the analysis itself, in-vitro diagnostic procedures involve intermediate, pre-analytic steps where a camera can be used to reduce human labor and improve the speed of diagnosis. An example is test tube classification, where the nature of the sample inside a test tube is recognized by identifying the color of its cap using a camera. Other examples include liquid-level detection and barcode decoding on trays or racks.

In this article, we will take a detailed look at e-con Systems’ edge AI camera solution for the test tube classification use case and learn why it is the perfect fit for AI-based in-vitro diagnostic applications.

e-CAM512 USB – 5MP edge AI camera for in-vitro diagnostics

The solution e-con has specially designed for AI-based pre-analytic tasks in in-vitro diagnostics is e-CAM512 USB – a 5MP smart USB camera based on the AR0521 sensor from onsemi. We will soon see this product in action in an actual demonstration of real-time test tube classification. But before that, let us have a detailed look at the key features of this edge AI camera from e-con Systems.

e-CAM512_USB is a USB 2.0 UVC-compliant camera capable of streaming VGA @ 30 fps and 960p @ 10 fps in the YUV format. In the grayscale Y8 format, it can stream VGA @ 60 fps and 960p @ 20 fps. It has an M12 lens holder, which provides an easy way to change lenses as per your end application requirements.
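
Because the camera is UVC compliant, it can be configured and streamed with standard host tools. Below is a minimal sketch using OpenCV to open the camera in one of the listed modes (VGA @ 30 fps, YUV); the device index and FOURCC are assumptions that depend on your host setup and are not part of e-con’s tooling.

```python
import cv2

# Open the UVC camera; the device index (0) is an assumption for your host setup
cap = cv2.VideoCapture(0)

# Request the packed YUV (YUYV) format and VGA @ 30 fps, one of the listed modes
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()          # OpenCV hands the frame back as a BGR array
if ok:
    cv2.imwrite("sample_frame.png", frame)
cap.release()
```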

This AI camera comes with an onboard image signal processor that handles debayering, color correction, tonal balancing, noise reduction, and the auto-exposure and auto-white-balance algorithms – making it a highly optimized camera solution. Its image quality controls include brightness, contrast, sharpness, and gamma to help post-process the video output. Exposure, white balance, and sensor gain can also be tuned manually as required.
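
These controls can be adjusted from the host as well. The sketch below shows how such UVC controls are typically set through OpenCV property IDs; the values are placeholders rather than recommended settings, and not every backend exposes every control.

```python
import cv2

cap = cv2.VideoCapture(0)

# Post-processing controls exposed over UVC (values are placeholders)
cap.set(cv2.CAP_PROP_BRIGHTNESS, 10)
cap.set(cv2.CAP_PROP_CONTRAST, 32)
cap.set(cv2.CAP_PROP_SHARPNESS, 3)
cap.set(cv2.CAP_PROP_GAMMA, 100)

# Switch exposure / white balance / gain to manual where the backend allows it
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)   # 1 = manual mode on many V4L2 backends
cap.set(cv2.CAP_PROP_EXPOSURE, 156)
cap.set(cv2.CAP_PROP_AUTO_WB, 0)
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4600)
cap.set(cv2.CAP_PROP_GAIN, 8)

cap.release()
```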

This smart camera is powered by the i.MX RT1170 processor from NXP, which makes it a truly edge-AI-capable device. The i.MX RT1170 crossover MCUs are part of NXP’s EdgeVerse™ edge computing platform and set speed records at 1 GHz: the dual-core processor runs an Arm® Cortex®-M7 core at 1 GHz and an Arm Cortex-M4 core at 400 MHz, and it also supports a wide operating temperature range. For AI workloads, e-CAM512_USB supports the TensorFlow Lite Micro, DeepViewRT, and Glow inference engines for neural network deployment.
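
The int8 and float32 figures discussed in the next section correspond to quantized and unquantized models. As a rough illustration (not e-con’s or NXP’s own workflow), the sketch below shows how a trained Keras model could be converted into a fully int8-quantized TensorFlow Lite model of the kind typically deployed on an MCU with TensorFlow Lite Micro; the tiny stand-in model and random representative frames are placeholders so the sketch stays self-contained.

```python
import numpy as np
import tensorflow as tf

# Stand-in model and "representative" frames; in practice these come from
# your actual training run and captured data set.
trained_model = tf.keras.Sequential([
    tf.keras.layers.Input((96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),   # e.g. 4 cap colors
])
rep_images = np.random.randint(0, 255, (16, 96, 96, 3), dtype=np.uint8)

def representative_data():
    for img in rep_images:
        yield [np.expand_dims(img.astype(np.float32) / 255.0, 0)]

converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8     # full-integer model for the MCU
converter.inference_output_type = tf.int8

with open("classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())
```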

With e-con’s machine learning infrastructure, product developers can capture images for their data sets and then test and benchmark the trained models directly on the camera. This makes e-CAM512_USB a readily deployable solution for pre-analytic tasks in in-vitro diagnostic devices.

Seeing machine learning models in action on e-CAM512_USB

Next, let’s take a quick look at some benchmarking data and see how standard machine learning models perform on e-CAM512_USB.

The MobileNet V1 model with an input image size of 32×32 and an alpha of 0.5 has an inference time of about 18.5 ms in int8 and 97.4 ms in float32. Many basic detection tasks, such as checking for the presence of an object in an image, can be done with this model.

On the other hand, MobileNet V2 – an improved model compared to V1 – with an input image size of 32×32 and an alpha of 0.5 has an inference time of about 22.9 ms in int8 and 89.1 ms in float32.

The newer MobileNet V3-Small model with an input image size of 32×32 and an alpha of 0.5 has an inference time of about 22 ms in int8 and 60 ms in float32. For a larger input size such as 320×320 with an alpha of 1, the inference time is about 0.9 seconds.

Coming to the MobileNet V3-Large model: with an input image size of 32×32 and an alpha of 0.5, it has an inference time of about 36.9 ms in int8 and 112 ms in float32. For a larger input size such as 320×320 with an alpha of 1, the inference time is about 2.7 seconds.
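
For comparison during development, inference latency can also be measured on a host machine with the standard TensorFlow Lite interpreter, as sketched below. The model file names are placeholders, and host timings will not match the on-camera figures above; they are only useful for comparing int8 and float32 variants of the same model.

```python
import time
import numpy as np
import tensorflow as tf

def average_latency_ms(model_path, runs=50):
    interp = tf.lite.Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interp.invoke()                          # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interp.invoke()
    return (time.perf_counter() - start) / runs * 1000.0

# Model file names are placeholders for your own converted models
print("int8   :", average_latency_ms("mobilenet_int8.tflite"), "ms")
print("float32:", average_latency_ms("mobilenet_float32.tflite"), "ms")
```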

Demonstration of test tube classification using e-CAM512_USB

Now that we understand the features of e-CAM512_USB and the machine learning models used on it, let’s see the product put them into action in a test tube classification process. The demo video is given below:

As shown in the video, the use case involves detecting the color of the vial caps, which helps identify the sample inside each vial.

For this demonstration, the e-CAM512_USB camera was placed about 50 cm from the vials. The same camera was used to capture the initial data set for training, and the trained model was then deployed on the camera to automatically identify the color of the vial caps.

For training the model, images were captured under different lighting conditions and with the vials in different positions. About 1,500 images were captured, augmented, and fed into the training process. As seen in the video, the model is able to predict and display the cap colors in real time.
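
The exact augmentation pipeline used for this demo is not detailed here, but as a rough illustration, the sketch below applies a few simple lighting and position augmentations with tf.image; the folder name and the 96×96 image size are assumptions.

```python
import tensorflow as tf

# Load images from per-class folders (e.g. one folder per cap color)
dataset = tf.keras.utils.image_dataset_from_directory(
    "vial_dataset", image_size=(96, 96), batch_size=32)

def augment(image, label):
    image = image / 255.0
    image = tf.image.random_brightness(image, max_delta=0.2)   # lighting variation
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_flip_left_right(image)             # position variation
    # Hue/saturation shifts are intentionally avoided: the label is the cap color
    return image, label

augmented = dataset.map(augment)
```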

The ML model at work here is a constrained object detection model that localizes each object by its centroid. High-resolution images of 720 × 720 pixels are captured by the camera and resized in the PXP block of the RT1170 processor. The PXP also converts the format from YUV to RGB before the image is fed into the inference engine. The input to the model is a 96 × 96 RGB image.
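
On the camera, the resize and YUV-to-RGB conversion happen in the PXP block before inference. The sketch below mimics the same pipeline on a host with OpenCV and the TFLite interpreter; the model file name, the input image, and the grid-of-scores output layout are assumptions made for illustration, since the exact output format of the constrained detector is not published.

```python
import cv2
import numpy as np
import tensorflow as tf

interp = tf.lite.Interpreter(model_path="cap_color_int8.tflite")  # placeholder name
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]

# A saved 720x720 capture stands in here for a live frame from the camera
frame = cv2.imread("capture_720x720.png")
rgb = cv2.cvtColor(cv2.resize(frame, (96, 96)), cv2.COLOR_BGR2RGB)

# Quantize the 96x96 RGB input to int8 using the model's input scale/zero point
scale, zero_point = inp["quantization"]
quantized = np.round(rgb / 255.0 / scale + zero_point).astype(np.int8)
interp.set_tensor(inp["index"], quantized[np.newaxis, ...])
interp.invoke()

# Assumed output layout: a coarse HxWxC grid of class scores (class 0 = background)
heatmap = interp.get_tensor(out["index"])[0]
class_map = heatmap.argmax(axis=-1)
ys, xs = np.nonzero(class_map)
if len(xs):
    print("detected cap centroid (grid coordinates):", xs.mean(), ys.mean())
```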

For this use case, we initially trained a custom object detection model based on MobileNet V2 with a single-shot detector, and inference took about 2.4 seconds. With the constrained object detection model, we reduced the inference time to 32 milliseconds on average. This is just one of the many use cases e-CAM512_USB can be used for.

Tools and accelerators offered by e-con Systems to run the inference engines

e-CAM512_USB comes with a mode to configure the camera to the same resolution used to train the model, which means data collection can be done with the camera itself. As a result, customers at any stage of their development – whether they are still collecting data or already have a pre-trained model – can use this camera for their AI-based use case.
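
As an illustration of such a data-collection step (independent of e-con’s own tools), the sketch below saves labelled frames from the camera into per-class folders; the key binding, frame count, and folder layout are assumptions.

```python
import pathlib
import cv2

label = "red_cap"                              # class currently being captured
out_dir = pathlib.Path("vial_dataset") / label
out_dir.mkdir(parents=True, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while count < 200:                             # target number of frames per class
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("data collection", frame)
    if cv2.waitKey(1) & 0xFF == ord("s"):      # press 's' to save the current frame
        cv2.imwrite(str(out_dir / f"{label}_{count:04d}.png"), frame)
        count += 1
cap.release()
cv2.destroyAllWindows()
```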

After data collection, the model can be trained and optimized using NXP’s eIQ toolkit (or any other toolkit you are familiar with), resulting in a TensorFlow Lite Micro model. Product developers can then benchmark and test their models on e-CAM512_USB using the easy-to-use tools that come with the camera. The benchmarking tool gives you a true sense of your model’s performance without having to test it in a real-world scenario.
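
For background, a TensorFlow Lite Micro model is commonly embedded as a C byte array before being flashed to an MCU; the generic sketch below shows that step. Note that e-con’s own tools handle loading and running the model on e-CAM512_USB, so this is only context, not part of their workflow, and the file names are placeholders.

```python
# Read the converted .tflite flatbuffer and write it out as a C header
with open("classifier_int8.tflite", "rb") as f:
    model_bytes = f.read()

lines = ["const unsigned char g_model[] = {"]
for i in range(0, len(model_bytes), 12):
    chunk = ", ".join(f"0x{b:02x}" for b in model_bytes[i:i + 12])
    lines.append(f"  {chunk},")
lines.append("};")
lines.append(f"const unsigned int g_model_len = {len(model_bytes)};")

with open("model_data.h", "w") as f:
    f.write("\n".join(lines) + "\n")
```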

After benchmarking and optimizing the TensorFlow Lite Micro model, you can use e-con’s model loading software to run the model and get real-time inferencing information. We provide detailed documentation for using these tools and programming the camera, and we extend full support to all our customers during both the development and integration phases.

We hope this gave you a good understanding of why e-CAM512_USB is the perfect edge AI camera for in-vitro diagnostics. If you wish to learn more about the product and its features, please visit the product page. You can also write to us at camerasolutions@e-consystems.com if you are interested in integrating the product into your diagnostic device.
