Greetings.
I am following the instructions to get a custom YOLOv5 model converted to RKNN for use with a Luckfox Pico Max.
All the versions are according to spec, and when I try to convert any YOLOv5 model, including the default COCO one, I get illegal instruction/core dump errors.
See the messages below.
(rockchip-rknn) stelios@Dev-VirtualBox:~/projects/luckfox/rknn_model_zoo/examples/yolov5/python$ python convert.py yolov5s.onnx rv1106
W __init__: rknn-toolkit2 version: 1.6.0+81f21f4d
--> Config model
done
--> Loading model
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 12!
W load_onnx: Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
Loading : 100%|█████████████████████████████████████████████████| 135/135 [00:00<00:00, 5606.30it/s]
done
--> Building model
W build: found outlier value, this may affect quantization accuracy
const name abs_mean abs_std outlier value
onnx::Conv_371 0.83 1.39 14.282
Illegal instruction (core dumped)
Any idea what the problem is?
Rknn model zoo fails during conversion with core dump message
Hello, it seems the input parameters for your command are incorrect and the quantization option is missing. Here is the command I executed in the same directory, based on the official documentation, for your reference.
python convert.py ../model/yolov5s_relu.onnx rv1106 i8 ../model/yolov5s_relu.rknn
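For reference, the convert.py script in rknn_model_zoo takes the ONNX model path, the target platform, an optional quantization type, and an optional output path. The usage sketch below is an assumption based on the model zoo documentation, so argument details may differ between versions:

python convert.py <onnx_model> <target_platform> [i8/fp] [output_rknn_path]

With i8 the model is quantized during the build step, while fp keeps it in floating point.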
Thanks for the response.
The problem was that the Ubuntu VM I was using did not expose the AVX CPU instruction set, which most (if not all) AI software these days requires (TensorFlow, PyTorch, etc.).
Enabling AVX for the VM resolved it.
I think there should be a mention of this on the wiki, as during my searches I found a lot of people with the same problem.
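For anyone hitting the same crash: a quick way to confirm whether the guest actually exposes AVX is to check the CPU flags from inside the VM (a standard Linux check, nothing specific to the Luckfox toolchain):

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u

If that prints nothing, the guest has no AVX, and the prebuilt Python wheels used by rknn-toolkit2 (NumPy, PyTorch, etc.) can fail with exactly this kind of "Illegal instruction (core dumped)" error. On VirtualBox the cause is often on the host side (for example, Hyper-V being active on a Windows host can hide AVX from guests), so the exact fix depends on the host configuration.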
Thank you for your feedback. We will update the WIKI after verification.