
RKNN Model Inference example not working Memory error

Posted: 2024-01-18 11:37
by Robbal
Hi

I can't get the https://wiki.luckfox.com/Luckfox-Pico/L ... RKNN-Test example to run on my Pico Max board. I have followed the instructions, and I get the following error when I try to execute the program on the device:

Code:

E RKNN: failed to allocate fd, ret: -1, errno: 12, errstr: Cannot allocate memory
E RKNN: failed to allocate model memory!, size: 13977280, flags: #a
rknn_init fail! ret=-1
Does anyone know where I should look to fix this?

Thanks

Robby

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-18 11:52
by Robbal
Looks like the default firmware works, but if I build my own firmware it fails.

Not sure yet

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-19 3:35
by Luckfox Taylor
Run 'htop' to check background processes and identify which ones are consuming a significant amount of memory. If processes such as Samba and Python are not essential, consider temporarily terminating them, then observe the system's performance.
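
For example, from the board's shell (free and top are BusyBox applets if htop is not present in the image):

Code:

htop                 # interactive view; sort by MEM% to find heavy processes
free -m              # quick summary of used and free memory
kill <PID>           # temporarily stop a non-essential process; <PID> is a placeholder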

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-19 8:05
by Robbal
Thank you

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-19 13:51
by Robbal
Hmm... I got memory usage down as low as possible and I still have this problem. I'm not sure now if I am missing a library or something.

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-20 3:45
by Luckfox Taylor
If the official image and model are used, there should not be any memory errors: the Max has 256MB of DDR3 and the Pro has 128MB of DDR3, and running the models even on the Pro should not produce errors. Please carefully review the steps. Finally, we recommend that you provide your execution process, steps, and results so that engineers can more easily help you troubleshoot the issue.
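
Output such as the following is useful to capture (the demo name below is only a placeholder for the wiki example binary):

Code:

free -m                             # memory state just before the run
./rknn_demo 2>&1 | tee run.log      # placeholder name; run the wiki example and keep its output
dmesg | tail -n 30                  # kernel messages printed around the failure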

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-21 16:15
by Robbal
Luckfox-pico MAX
Linux 22.04
Official SDK

I am using a MAX, and if I use the official firmware, build and run the RKNN model, everything works.
If I use Buildroot and build my own firmware (only adding one Python module), then the RKNN model does not work. (https://wiki.luckfox.com/Luckfox-Pico/L ... -Buildroot)

It looks like the RKNN model will only run on the official firmware and nothing else. This is a bit disappointing, because if I add any library or module it can't run inference. The official firmware is of limited use to me if I can't add a module.
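
For reference, this is roughly the build flow I follow (command names as I understand them from the wiki; treat the exact arguments as assumptions for your own SDK checkout):

Code:

cd luckfox-pico                # SDK checkout from the wiki
./build.sh lunch               # select the Pico Max board configuration
./build.sh buildrootconfig     # Buildroot menuconfig, where I enable the extra Python module
./build.sh                     # build the full firmware image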

If you like, I can link my firmware for you to test.

Thanks

Robby

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-22 1:58
by Eng38
Hello,

Please try adjusting RK_BOOTARGS_CMA_SIZE to a value greater than 17M and then test again.
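The setting lives in the SDK's board configuration; a minimal sketch, assuming a BoardConfig*.mk under project/cfg/ (the exact path and syntax may differ in your checkout):

Code:

# BoardConfig-*.mk for the Pico Max (path is an assumption)
export RK_BOOTARGS_CMA_SIZE="32M"    # anything comfortably above 17M, then rebuild and reflash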

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-22 19:01
by Robbal
Wow, thank you! That worked. :lol: Can you explain why it worked? I thought RK_BOOTARGS_CMA_SIZE was only for the camera.


Many Many thanks

Robby

Re: RKNN Model Inference example not working Memory error

Posted: 2024-01-23 1:25
by Luckfox Taylor
The system reserves a portion of memory for the camera; the default is 66MB and the minimum is 17MB. Reducing the amount reserved for the camera releases that portion of memory for other use.
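
You can check how much reserved (CMA) memory the running image actually has, for example:

Code:

cat /proc/cmdline            # look for the cma=... boot argument
grep -i cma /proc/meminfo    # CmaTotal / CmaFree, on kernels that expose CMA statistics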