tensorrt_yolo
sample yolov5 model throws error on inference
#1647
Comments
@HaoruXue there was previously a discussion about prompting the user to download the necessary files required by any ML models or inference frameworks. A similar solution could be provided for this bug to prevent the CUDA error. Check autowarefoundation/autoware#2508
I think the `.engine` file will be automatically created if you specify the onnx file in the launch file.
@mitsudome-r I tested without converting the onnx model in advance and now it works. It may be worth documenting that the package downloads the model directly and converts the onnx upon first launch.
After discussing with Mitsudome-san I'll submit a PR for documentation changes in
@HaoruXue @mitsudome-r
@wep21 if my understanding is correct, you currently need to run the node once to convert the onnx to a TensorRT engine. For the sake of deployment, are there alternative ways to make this happen at an earlier stage? Maybe running a converter script in the build process?
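One way the build-time conversion could look, as a sketch only: a small shell helper invoked from a setup or build step, so the first node launch doesn't have to do the conversion. The caching behavior and file layout here are assumptions, not part of `tensorrt_yolo`; only `trtexec` with `--onnx`/`--saveEngine` comes from this issue.

```shell
#!/usr/bin/env bash
# Hypothetical pre-conversion helper (not part of tensorrt_yolo):
# builds the TensorRT engine ahead of time and skips the conversion
# if a matching .engine file already exists next to the .onnx.
convert_engine() {
  local onnx="$1"
  local engine="${onnx%.onnx}.engine"   # yolov5l.onnx -> yolov5l.engine
  if [ ! -f "$engine" ]; then
    # trtexec flags as used in the reproduction steps of this issue
    trtexec --onnx="$onnx" --saveEngine="$engine" > /dev/null || return 1
  fi
  echo "$engine"
}

# Usage (e.g. from a setup script or build hook):
# convert_engine /path/to/yolov5l.onnx
```

Note this is only a sketch of the caching idea; a real build-step integration would also have to decide where the engine file is installed so the node can find it.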
Checklist
Description
The tensorrt_yolo package links to a couple of YoloV5 ONNX models. I converted the yolov5l model to `.engine` and ran it, but the node throws an error immediately:
`compute-sanitizer` reports that the illegal memory access comes from `enqueueV2` on line 304. A quick search shows this is a known old issue with Yolov5 and AutoShape. The issue goes away when I download pre-trained models from PyTorch Hub and convert them to TensorRT using the scripts provided by the ultralytics repo:
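For reference, the sanitizer run looks roughly like this; the node command is a placeholder for however you launch it, not the package's actual executable name:

```shell
# Run the node under compute-sanitizer's memcheck tool (the default tool)
# to localize the illegal memory access; <your-node-command> is a placeholder.
compute-sanitizer --tool memcheck <your-node-command>
```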
Expected behavior
CUDA should not throw an error
Actual behavior
CUDA throws an illegal memory access error
Steps to reproduce
1. Download the yolov5l ONNX model linked in `tensorrt_yolo`
2. Convert it to TensorRT: `trtexec --onnx=yolov5l.onnx --saveEngine=yolov5l.engine`
3. Run the `tensorrt_yolo` node
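Before launching the node, the generated engine can also be exercised on its own, which helps separate engine/model problems from node problems. `trtexec` runs a few inference iterations with synthetic input by default:

```shell
# Load the generated engine and run inference with synthetic input;
# if this already faults, the problem is in the engine/model,
# not in the tensorrt_yolo node.
trtexec --loadEngine=yolov5l.engine --verbose
```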
Versions
Possible causes
I'm not a pro in TensorRT but here are a couple of potential causes:
The `.engine` and `.onnx` must be generated at the same time using the method mentioned in AutoShape Usage ultralytics/yolov5#7128. It would be great if someone could explain where the linked model comes from, and update it if necessary.
Also, I'm not sure whether the way I'm running inference with the linked model is correct. It would be great if more documentation on the model conversion could be linked.
Additional context
No response