export to ONNX with NMS #81
Why does an error occur when the ONNX file generated by export.py is converted to a TensorRT engine, while the model from #79 (comment) converts normally?
Which TensorRT version are you using?
tensorrt==7.2.1.6
Hello everyone! I would like to introduce my open-source project, TensorRT-YOLO, a tool for deploying the YOLO series (including YOLOv9) with Efficient NMS in TensorRT.

Key Features

Performance

Tests were run on an RTX 2080Ti 22GB GPU with an AMD Ryzen 7 5700X 8-core CPU and 128GB RAM. Model performance was evaluated with TensorRT engines built by TensorRT-YOLO. All models were deployed with FP16, batch size 4, and input size 640.

YOLOv9 series: YOLOv9-C, YOLOv9-E, YOLOv9-C-Converted, YOLOv9-E-Converted, GELAN-C, and GELAN-E.

YOLOv8 series: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x.
Does the export to ONNX work with the NMS module, and with dynamic batch size? If so, how do I do it? As I understand it, the NMS module only works for TF models?
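For readers unfamiliar with what the NMS step computes when it is baked into the exported graph (e.g. via TensorRT's Efficient NMS plugin or the ONNX NonMaxSuppression op), here is a minimal NumPy sketch of greedy non-maximum suppression. This is an illustration of the algorithm only, not the actual plugin or export code; the function name and box format (`[x1, y1, x2, y2]` corners) are assumptions for the example.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat.

    boxes:  (N, 4) array of [x1, y1, x2, y2] corner coordinates.
    scores: (N,) array of confidence scores.
    Returns the indices of the boxes that survive suppression.
    """
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle between the kept box and all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Keep only boxes whose overlap with the chosen box is small enough.
        order = rest[iou <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: box 1 overlaps box 0 heavily and is suppressed
```

The exported NMS node performs essentially this computation on the GPU, per class and per batch element, which is why fusing it into the engine avoids a separate post-processing pass on the host.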