Frequently asked questions (FAQ)
- 1. If I have my own dataset, will e-con help to develop an ML (Machine Learning) model for the application using EdgeECAM50_USB?
- 2. What would be the typical power consumption of the EdgeECAM50_USB camera?
- 3. Can you explain the ML workflow using the EdgeECAM50_USB camera and the toolkits to develop an ML application?
- 4. How can I get the inference output from the EdgeECAM50_USB camera?
- 5. In EdgeECAM50_USB, how much of the actual 5MP resolution from the camera is used for the model?
- 6. Can I run facial recognition with EdgeECAM50_USB?
- 7. What frameworks other than TFLite Micro can be used to build a model, and how can they be loaded onto the camera?
1. If I have my own dataset, will e-con help to develop an ML (Machine Learning) model for the application using EdgeECAM50_USB?
Data is the single most important entity for a good ML-based CV algorithm. For customers having the right dataset for the actual end application, e-con Systems would be glad to help build a successful ML solution.
2. What would be the typical power consumption of the EdgeECAM50_USB camera?
It depends on the selected resolution and the complexity of the model in use. For example, for the application we demonstrated in our recent webinar, “Simplifying product development by leveraging edge AI cameras”, the power consumption was around 0.286 A @ 5 V (about 1.43 W). So far, the maximum power consumption we have observed is 0.465 A @ 5 V (about 2.3 W).
3. Can you explain the ML workflow using the EdgeECAM50_USB camera and the toolkits to develop an ML application?
First is the data collection part. You can select a custom resolution, format (Grey or RGB), and, if required, a region of interest. Once this is done, you can start collecting your dataset.
With this dataset, you can develop an ML model based on your application requirements using a framework or tool of your choice. The device and the provided toolkits primarily support TFLite Micro models. If you choose a framework other than TensorFlow, there are multiple open-source libraries to convert the model to the TFLite Micro format.
You can benchmark the model with the provided benchmarking tools to obtain profiling details, which can then be used to optimize the model. The optimized model can be deployed to the camera using the model loading tool.
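As an illustration only (this is standard TensorFlow tooling, not e-con's toolkit), the sketch below shows one common way to export a trained Keras model as a fully int8-quantized TFLite flatbuffer, which is the usual starting point for TFLite Micro deployment. The model file name, input shape, and representative dataset here are placeholders and should match your own model.

```python
import numpy as np
import tensorflow as tf

# Placeholder: a trained Keras model saved earlier
model = tf.keras.models.load_model("my_model.h5")

def representative_dataset():
    # Placeholder calibration data; in practice, yield samples from your
    # collected dataset with the same shape/dtype as the model input
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```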
4. How can I get the inference output from the EdgeECAM50_USB camera?
The inference output from the camera can be retrieved through the HID extension interface of the same USB 2.0 connection. The output length and frequency can also be configured based on the end application.
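For a rough idea of what reading HID reports on the host can look like, here is a minimal sketch using the hidapi Python bindings. The vendor/product IDs, report size, and payload interpretation below are placeholders; the actual HID extension protocol and report layout are defined by e-con's SDK and documentation.

```python
import hid  # hidapi Python bindings

VENDOR_ID = 0x1234   # placeholder, not the real VID
PRODUCT_ID = 0x5678  # placeholder, not the real PID

dev = hid.device()
dev.open(VENDOR_ID, PRODUCT_ID)
dev.set_nonblocking(False)

try:
    while True:
        # Read one raw report (64-byte size and 1 s timeout are assumptions)
        report = dev.read(64, 1000)
        if report:
            # How these bytes map to inference results depends on the
            # configured output length/format in the e-con SDK
            print("inference report:", bytes(report).hex())
finally:
    dev.close()
```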
5. In EdgeECAM50_USB, how much of the actual 5MP resolution from the camera is used for the model?
The inbuilt ISP in the camera resizes the available 5MP from the sensor to match the resolution required by the model. If you need detection only in a particular region of the image, there is also an option to select a region of interest, and the camera automatically crops that region for the model input.
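As a host-side analogy of what the on-camera ISP does (crop an optional ROI, then resize to the model input), here is a short OpenCV sketch; the frame file, ROI coordinates, and 96x96 model input size are example values, not the camera's fixed settings.

```python
import cv2

full_frame = cv2.imread("frame_2592x1944.png")  # placeholder 5MP capture

# Example region of interest: x, y, width, height
x, y, w, h = 600, 400, 1200, 1200
cropped = full_frame[y:y + h, x:x + w]

# Resize the cropped region to the model's expected input resolution
model_input = cv2.resize(cropped, (96, 96))
```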
6. Can I run facial recognition with EdgeECAM50_USB?
Yes, less complex facial recognition tasks like face detection, gender identification, and emotion detection can be done with this camera. More complex tasks like facial identification or matching are better suited to the other platforms mentioned earlier.
7. What frameworks other than TFLite Micro can be used to build a model, and how can they be loaded onto the camera?
Apart from TFLite Micro, customization support shall be provided for customers to integrate DeepView RT and Glow-based inferencing models into EdgeECAM50_USB.