
How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-21 18:36
by grayfacenospace
Hello,

I have managed to run the Yolov5 demo, but I am not quite sure how to use the CSI camera.

1) How can I take pictures with the CSI camera and use them inside Python?
2) How can I use a Yolov5 model to run inference on the pictures inside Python?

Thanks in advance!

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 7:46
by Yurii
Can you provide instructions on how to work with the camera?

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 8:13
by grayfacenospace
Currently I have flashed the buildroot image onto the Pico Max. After that I was able to look up its IP address on my router and then use VLC (only on Windows) to play the RTSP demo stream with detection.

I have also managed to compile the yolov5 demo and run it on a static image on the Pico.

Now my question is how to use both the camera and the Yolov5 model inside a Python program on the Pico.

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 8:15
by Luckfox Taylor
For the camera, there are currently no Python sample programs. The Rockchip SDK does not support Python programming in the first place, so Python sample programs for the camera are even less likely. The current camera support includes:

1. RTSP streaming in the buildroot system.
2. RK sample programs.
3. v4l2-ctl.
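Since v4l2-ctl is on that list, one unofficial way to reach the camera from Python is to shell out to it and slice up the raw frame it dumps. This is only a sketch, not a supported path; the device node, resolution, and pixel format below are assumptions for illustration and will need to match your sensor's actual configuration.

```python
import subprocess

# Sketch: capture a single raw NV12 frame by shelling out to v4l2-ctl,
# then slice out the Y (luma) plane in pure Python. The device node,
# resolution, and pixel format are assumptions for illustration.
WIDTH, HEIGHT = 640, 480

def v4l2_capture_cmd(device="/dev/video0", out="frame.nv12"):
    """Build a v4l2-ctl command that dumps one frame to a file."""
    return [
        "v4l2-ctl",
        f"--device={device}",
        f"--set-fmt-video=width={WIDTH},height={HEIGHT},pixelformat=NV12",
        "--stream-mmap",
        "--stream-count=1",
        f"--stream-to={out}",
    ]

def capture_frame(device="/dev/video0", out="frame.nv12"):
    """Run v4l2-ctl on the board and return the raw frame bytes."""
    subprocess.run(v4l2_capture_cmd(device, out), check=True)
    with open(out, "rb") as f:
        return f.read()

def y_plane(nv12: bytes, width: int, height: int) -> bytes:
    """NV12 stores the full-resolution Y plane first, then interleaved
    UV, so the grayscale image is simply the first width*height bytes."""
    return nv12[: width * height]
```

On the board, `y_plane(capture_frame(), WIDTH, HEIGHT)` would then give a grayscale buffer that Python code can process further.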

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 9:19
by grayfacenospace
That is a shame to hear. I thought that since the board had Python support, it would be able to do camera capture and inference in Python. This is a major use-case for many people, and the reason that I bought the board.

Are there any example C++ programs for capturing the camera image and running inference, or does the board only run demo software?

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 10:01
by Luckfox Taylor
After cross-compiling, the packaged files are stored in the path /oem/usr/bin on the development board.
The source code directory for the sample programs is luckfox-pico/media/samples.

Reference document: https://wiki.luckfox.com/Luckfox-Pico/Datasheets
Currently, there is no English version available.

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-22 12:50
by grayfacenospace
Thank you for the fast response.

I am assuming that this is the code for inference https://github.com/rockchip-linux/rknpu ... rc/main.cc

And this seems to be the code for grabbing a frame from the camera(?) https://github.com/LuckfoxTECH/luckfox- ... et_frame.c

Sadly I am not knowledgeable enough in C to modify the program to grab a frame from the camera instead of reading an image. There seems to be a lot of pointer stuff going on which I don't understand.

If someone could make a demo program that
- Loads a Yolov5 RKNN model
- Enters a while True loop
- Grabs an image
- Runs inference on the image using the YoloV5 model and prints the result

That would be very helpful.

I hope that the board will support Python for the camera and for model inference in the future. That would make it a very popular choice in the maker community.
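To make the request concrete, here is a minimal Python skeleton of the loop described above. Since there is no official Python camera or RKNN API on the Pico at the time of writing, the capture and inference steps are passed in as callables; `capture` and `infer` are placeholders, not a real Luckfox API, and only the control flow is shown.

```python
# Skeleton of the requested demo loop. `capture` and `infer` are
# placeholder callables, NOT a real Luckfox/RKNN API: capture() should
# return one image (or None to stop), infer(image) the detection result.
def detection_loop(capture, infer, max_frames=None):
    """Grab images and run inference until capture() returns None
    (or until max_frames images have been processed)."""
    n = 0
    while max_frames is None or n < max_frames:
        image = capture()
        if image is None:
            break
        result = infer(image)
        print(f"frame {n}: {result}")
        n += 1
    return n
```

With real bindings, `capture` would wrap the camera grab and `infer` would wrap loading the Yolov5 RKNN model and calling its inference entry point.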

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-23 7:06
by Eng38
Hello,

For C code that runs inference on images with the YoloV5 model and prints out the results, please refer to the following tutorial: https://wiki.luckfox.com/Luckfox-Pico/L ... RKNN-Test/

To obtain frames from a video stream and save them as images, you can consult this tutorial: https://wiki.luckfox.com/Luckfox-Pico/L ... cv-mobile/
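The linked tutorials are in C, but the postprocessing step that turns raw detections into printed results is plain arithmetic and can be prototyped in Python first. This sketch applies a confidence threshold and a greedy per-class NMS to already-decoded detections; it illustrates the shape of that step, not the tutorial's exact code.

```python
# Sketch of YOLO-style postprocessing on already-decoded detections.
# Each detection is (class_id, confidence, (x1, y1, x2, y2)).
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def filter_detections(dets, conf_thresh=0.25, iou_thresh=0.45):
    """Drop low-confidence boxes, then greedy NMS within each class:
    keep a box only if it does not overlap an already-kept box of the
    same class by more than iou_thresh."""
    dets = sorted((d for d in dets if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        if all(d[0] != k[0] or iou(d[2], k[2]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

Printing each kept tuple then gives the class/confidence/box output the C tutorial produces.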

Re: How to use the CSI camera and YoloV5 in Python

Posted: 2024-01-24 13:19
by Robbal
I have the same problem. OpenCV does not give very many examples and I can't get this to work. What I have had a bit of success with is using ffmpeg to save frames and then running inference on the frames, but it still misses frames. Sometimes I wish the docs were better.
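The ffmpeg approach described above can be scripted from Python. This sketch builds an ffmpeg command that saves frames from the RTSP stream as numbered JPEGs; the stream URL in the usage note is a placeholder, and whether frames get dropped will still depend on the board and the network.

```python
import subprocess

def ffmpeg_dump_cmd(url, fps=1, pattern="frame_%04d.jpg"):
    """Build an ffmpeg command that saves `fps` frames per second
    from a stream as numbered JPEG files."""
    return ["ffmpeg", "-i", url, "-vf", f"fps={fps}", pattern]

def dump_frames(url, fps=1, pattern="frame_%04d.jpg"):
    """Run ffmpeg until the stream ends (or the process is killed)."""
    return subprocess.run(ffmpeg_dump_cmd(url, fps, pattern), check=True)
```

Something like `dump_frames("rtsp://<board-ip>/live/0")` (the stream path is an assumption; check your board's demo) would fill the working directory with JPEGs that the Yolov5 demo can then be run against.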