YOLOv9 with End2End (Efficient NMS) #130
Comments
Added to readme.
https://github.com/levipereira/yolov9/blob/main/models/experimental.py#L140 There may be a bug here.
Thanks. Fixed; the issue was that output[1] was set as the prediction for the main branch instead of output[0].
Hello everyone! I would like to introduce my open-source project, TensorRT-YOLO, a tool for deploying the YOLO series with Efficient NMS in TensorRT. (Key feature list omitted.)

Performance test setup: RTX 2080 Ti GPU, AMD Ryzen 7 5700X 8-core CPU, 128 GB RAM. All models were converted to ONNX with the EfficientNMS plugin using the TensorRT-YOLO tool.

Model export and performance testing: use the following commands to export the model and run the benchmark:

```
trtyolo export -v yolov9 -w yolov9-converted.pt --imgsz 640 -o ./
trtexec --onnx=yolov9-converted.onnx --saveEngine=yolov9-converted.engine --fp16
trtexec --fp16 --avgRuns=1000 --useSpinWait --loadEngine=yolov9-converted.engine
```

Performance testing was conducted with TensorRT-YOLO inference on the coco128 dataset. (YOLOv9 Series and YOLOv8 Series benchmark tables omitted.)
Hey @levipereira, first of all, thanks for this work! The warning in the ONNX export step results in a runtime error when trying to run inference. Any idea what could cause this? I can provide more code/context if needed.
Inference with an end-to-end model using the YOLOv9 source code is not supported due to lack of implementation.
Hi, do we have to reparameterize the finetuned .pt file before exporting to ONNX format? When I run the reparameterization Python code, it throws an error like "AttributeError: 'DetectionModel' object has no attribute 'nc'".
@berkgungor Hi, you can try TensorRT-YOLO, which also supports exporting ONNX with Efficient NMS and does not require reparameterizing the finetuned .pt.
I exported my YOLOv9 model to ONNX using your End2End class, but when I try to load it for inference with:

```python
import onnxruntime as ort

onnx_model = "./best-end2end.onnx"
```

it returns an error. (Error message and package versions omitted.)
@mdciri @radandreicristian The primary purpose of employing End2End is to utilize ONNX models on TensorRT. If you choose not to use TensorRT, you should proceed with the standard ONNX export process. Use case: https://github.com/levipereira/triton-server-yolo/tree/master
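To see why such an export only makes sense on TensorRT, it can help to inspect what the End2End graph actually exposes. A minimal sketch with the `onnx` package; the file name is a placeholder, and the `num_dets`/`det_boxes`/`det_scores`/`det_classes` output names follow the usual EfficientNMS_TRT convention, so your export may differ:

```python
# Inspect the outputs of an End2End ONNX export. An End2End graph ends in an
# EfficientNMS_TRT node, which only TensorRT can execute; onnxruntime will
# fail to load it because that op is a TensorRT plugin, not a standard ONNX op.
import onnx

model = onnx.load("yolov9-converted-end2end.onnx")  # placeholder file name
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)  # expected: num_dets, det_boxes, det_scores, det_classes
```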
When you encounter the "AttributeError: 'DetectionModel' object has no attribute 'nc'" error, you can manually change model.nc to your number of classes (in my case, model.nc = 4). Also, in gelan-c.yaml, line 4: change nc: 80 to nc: <your nc>; line 79: change …
Hi all, here's the dynamic-batch version for YOLOv9 inference in TensorRT in C++, using @levipereira's work for dynamic support. Any batch size with any image size is supported. Reference code for batching data is also included.
@gl94 I already changed the class numbers in the yaml; nothing changed, same error. @levipereira I exported the model to ONNX using just --include onnx without specifying end2end and then converted it to a TensorRT engine, and it works fine. I also did not reparameterize, since that threw an error.
@levipereira Yes, in your implementation End2End is meant for using the ONNX model with TensorRT. Still, I would like to convert my model to ONNX (with NMS) so that it works on ONNX Runtime. At the moment, the export to ONNX does not take NMS into consideration.
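For this route (standard export, NMS outside the graph), a minimal sketch of running a non-End2End YOLOv9 ONNX export with onnxruntime and applying NMS on the CPU. The output layout (1, 4+nc, n_anchors) with xywh boxes and the file name are assumptions; adjust to your actual export:

```python
# Run a standard (non-End2End) YOLOv9 ONNX export with onnxruntime, then
# apply confidence filtering and greedy NMS in NumPy.
import numpy as np
import onnxruntime as ort

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS over xyxy boxes; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thres]  # drop boxes overlapping box i
    return keep

sess = ort.InferenceSession("yolov9-converted.onnx",
                            providers=["CPUExecutionProvider"])
img = np.zeros((1, 3, 640, 640), dtype=np.float32)  # replace with a real preprocessed image
pred = sess.run(None, {sess.get_inputs()[0].name: img})[0][0]  # assumed (4+nc, n)

boxes_xywh, cls_scores = pred[:4].T, pred[4:].T  # (n, 4) and (n, nc)
scores, classes = cls_scores.max(1), cls_scores.argmax(1)
m = scores > 0.25                                # confidence threshold
boxes = boxes_xywh[m].copy()
boxes[:, :2] -= boxes[:, 2:] / 2                 # xywh -> xyxy
boxes[:, 2:] += boxes[:, :2]
keep = nms(boxes, scores[m])
print(boxes[keep], scores[m][keep], classes[m][keep])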
You might also need to make the same change in the reparameterization Python code: from model.nc = ckpt['model'].nc to model.nc = <your nc>.
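For reference, a minimal sketch of the workaround described above, based on the commonly shared YOLOv9 reparameterization snippet. The config path, checkpoint name, and class count (4) are placeholders, and it assumes the YOLOv9 repository is on the Python path:

```python
# Workaround for "'DetectionModel' object has no attribute 'nc'": hard-code
# the class count instead of reading it from the checkpoint.
import torch
from models.yolo import Model  # YOLOv9 repo module

device = torch.device("cpu")
nc = 4  # your number of classes

# Build the model from the (edited) config rather than from ckpt['model'].nc.
model = Model("models/detect/gelan-c.yaml", ch=3, nc=nc).to(device)
ckpt = torch.load("best.pt", map_location=device)  # finetuned checkpoint
model.nc = nc  # set nc directly, since ckpt['model'].nc may be missing
# ... the rest of the reparameterization (copying weights layer by layer)
# proceeds as in the original script.
```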
… repositories

- Removed YOLOv6, YOLOv7, and YOLOv9 export options from the CLI due to unsupported status.
- Added guidance for users to export ONNX models using the EfficientNMS_TRT plugin by referring to the official repositories:
  - YOLOv6: https://github.com/meituan/YOLOv6/tree/main/deploy/ONNX#tensorrt-backend-tensorrt-version-800
  - YOLOv7: https://github.com/WongKinYiu/yolov7#export
  - YOLOv9: WongKinYiu/yolov9#130 (comment)
- Ensured that users are directed to the most up-to-date and supported methods for exporting their models.
Thank you for your wonderful work!
YOLOv9 with End2End (Efficient NMS)
Note: The primary purpose of employing End2End is to utilize ONNX models on TensorRT. If you choose not to use TensorRT, you should proceed with the standard ONNX export process.
I've created a forked repository from the original, adding End-to-End support for ONNX export. The changes can be found in export.py and models/experimental.py. Both files remain fully compatible with all current export operations.
Check it out at https://github.com/levipereira/yolov9
Support for End-to-End ONNX Export: Added support for end-to-end ONNX export in export.py and models/experimental.py.

Model Compatibility: This functionality currently works with all DetectionModel models.

Configuration Variables: Use the following flags to configure the export:

- `--include onnx_end2end`: enables the End2End export.
- `--simplify`: ONNX/ONNX END2END: simplify the model.
- `--topk-all`: ONNX END2END/TF.js NMS: top-k for all classes to keep (default: 100).
- `--iou-thres`: ONNX END2END/TF.js NMS: IoU threshold (default: 0.45).
- `--conf-thres`: ONNX END2END/TF.js NMS: confidence threshold (default: 0.25).

Example:
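A plausible invocation reconstructed from the flags above; the weights file name is a placeholder:

```
python export.py --weights yolov9-c-converted.pt --include onnx_end2end --simplify --topk-all 100 --iou-thres 0.45 --conf-thres 0.25
```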