
Make your existing NVIDIA® Jetson Orin™ devices faster with Super Mode

The NVIDIA Jetson Orin™ Nano Super Developer Kit, with its compact size and high-performance computing capabilities, is redefining generative AI for small edge devices.

NVIDIA® has released an exciting update to its existing Jetson Orin™ product line just in time for the holiday season: Super Mode, which boosts the internal clocks of the NVIDIA® Jetson Orin™ Nano and NX modules to further increase AI performance and memory throughput.

Fig 1: Details of Jetson Commercial Modules Performance with Super Mode (Source: NVIDIA®)


Users can expect up to a 1.7× improvement in their AI workloads (depending on the AI model and other system conditions) on their existing systems with just a software update. Updating to JetPack 6.1 rev 1 allows users to set the MAXN power mode on Orin™ Nano and Orin™ NX devices.
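After updating to JetPack 6.1 rev 1, the new power mode can be selected with the standard `nvpmodel` tool. A minimal sketch is below; note that the MAXN mode index varies by device and JetPack release, so verify the correct index for your unit in `/etc/nvpmodel.conf` or via the power-mode menu in the desktop UI before switching:

```shell
# Query the currently active power mode
sudo nvpmodel -q

# Switch to the MAXN mode (index 0 here is an assumption -- check
# /etc/nvpmodel.conf for the MAXN entry on your specific device)
sudo nvpmodel -m 0

# Optionally pin clocks at their maximum for the selected mode
sudo jetson_clocks
```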

Fig 2: Performance Improvement of NanoOwl ViT Model with Super Mode On (Source: NVIDIA®)


We benchmarked the performance gain with a small test setup running the YOLOv8n object detection model on an Orin™ Nano, comparing JetPack 5.1.2 against JetPack 6.1 with Super Mode enabled. We will update these test results with more details later.

These are the configuration details of test case 1:

  1. Device: NVIDIA® Orin™ Nano dev kit
  2. OS: Jetpack 5.1.2 (L4T 35.4.1)
  3. Power mode: 15W
  4. Demo App: Object Detection using the YOLOv8n model
  5. Model input resolution: 384×640
  6. Camera used: See3CAM_CU81 running at 1280×720 (8MP HDR Camera)
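For readers who want to reproduce measurements like the ones below, here is a minimal timing-harness sketch. The `run_inference` callable is a hypothetical stand-in for the actual YOLOv8n pipeline (not part of our test code); the harness itself just collects per-frame latencies after a warm-up period:

```python
import time

def benchmark(run_inference, frames, warmup=10):
    """Measure per-frame inference latency in milliseconds.

    run_inference: callable taking one frame (stand-in for the real model)
    frames: iterable of input frames
    warmup: number of initial iterations to discard
    """
    times_ms = []
    for i, frame in enumerate(frames):
        start = time.perf_counter()
        run_inference(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup:  # skip warm-up iterations while caches/clocks settle
            times_ms.append(elapsed_ms)
    return {
        "avg_ms": sum(times_ms) / len(times_ms),
        "min_ms": min(times_ms),
        "max_ms": max(times_ms),
    }
```

Discarding the first few frames matters on Jetson devices, since clocks ramp up and GPU memory is allocated lazily during the first inferences.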

This is what we observed:

  1. Average Inference time: 21.3ms
  2. Minimum inference time: 19.2ms
  3. Maximum inference time: 23.1ms

Fig 3: Screenshot Showing JTOP and Camera Preview with YOLO Object Detection Running in Real Time


Next, we updated the same system to JetPack 6.1 and changed the power mode as follows:

  1. Device: NVIDIA® Orin™ Nano dev kit
  2. OS: Jetpack 6.1 (with super mode on)
  3. Power mode: MAXN
  4. Demo App: Object Detection using YOLOv8n model
  5. Model input resolution: 384×640
  6. Camera used: See3CAM_CU81 running at 1280×720

These are our observations with super mode enabled in a more complex scene:

  1. Average Inference time: 18.3ms
  2. Minimum inference time: 17.2ms
  3. Maximum inference time: 19.3ms
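Working out the throughput implied by the average latencies above (assuming inference is the bottleneck, i.e. ignoring capture and display overhead):

```python
avg_before_ms = 21.3  # JetPack 5.1.2, 15W power mode
avg_after_ms = 18.3   # JetPack 6.1, MAXN (Super Mode)

reduction_ms = avg_before_ms - avg_after_ms  # ~3 ms saved per frame
speedup = avg_before_ms / avg_after_ms       # ~1.16x faster
fps_before = 1000.0 / avg_before_ms          # ~47 frames per second
fps_after = 1000.0 / avg_after_ms            # ~55 frames per second
```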

That is roughly a 3 ms reduction in average inference time on a more complex scene. All of these results were obtained with no other optimizations, only with Super Mode enabled. With further optimization techniques such as TensorRT, we can extract even more performance from the same system.

Fig 4: Screenshot Showing JTOP, Inference Time, and Camera Preview with YOLO Running in Real Time with Super Mode On


With the increased bandwidth and AI compute of their existing kits, users can process more frames per second from their existing camera setups and systems. Now that’s a great Christmas gift for your team. Happy computing!

Disclaimer: These are preliminary test results. This blog will be updated later with more comprehensive details.
