
Inference YOLOV9 add nms batched by TensorRT (Python backend) #79

Closed
thaitc-hust opened this issue Feb 26, 2024 · 4 comments

Comments

@thaitc-hust

Give my repo a star if it's helpful for you!
link: https://github.com/thaitc-hust/yolov9-tensorrt

@WongKinYiu
Owner

Added to the README.

@WongKinYiu
Owner

https://github.com/thaitc-hust/yolov9-tensorrt/blob/main/torch2onnx.py#L27

You may need to use output = output[1] instead.
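For context, a minimal sketch of what that change amounts to (the wrapper class and names here are hypothetical, not from torch2onnx.py): the YOLOv9 forward pass can return more than one head, so the export script has to select the right element before tracing to ONNX.

```python
import torch

class Yolov9ExportWrapper(torch.nn.Module):
    """Hypothetical wrapper illustrating the suggested fix: the model's
    forward pass returns multiple outputs, and per the comment above,
    index 1 (not 0) is the prediction tensor to feed into NMS."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        output = self.model(x)
        # torch2onnx.py#L27 reportedly indexes the wrong element;
        # following the suggestion above, take output[1] instead.
        return output[1]
```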

@laugh12321

laugh12321 commented Mar 3, 2024

Hello everyone!

I would like to introduce my open-source project, TensorRT-YOLO, a tool for deploying the YOLO series (including YOLOv9) with Efficient NMS in TensorRT.

Key Features

  • Supports FLOAT32 and FLOAT16 ONNX export and TensorRT inference
  • Supports YOLOv5, YOLOv8, YOLOv9, PP-YOLOE and PP-YOLOE+
  • Integrates the EfficientNMS TensorRT plugin for accelerated post-processing (see the sketch after this list)
  • Utilizes CUDA kernels to accelerate preprocessing
  • Supports C++ and Python inference
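As a rough illustration of why the EfficientNMS plugin simplifies post-processing: the EfficientNMS_TRT plugin emits four fixed tensors (the number of valid detections per image, plus boxes, scores, and class ids padded to a maximum count), so decoding reduces to slicing the first num_dets entries per image. A minimal sketch, assuming the four outputs have already been copied back to host NumPy arrays; the function name is mine, not TensorRT-YOLO's API:

```python
import numpy as np

def parse_efficient_nms(num_dets, boxes, scores, classes):
    """Collect valid detections from the four outputs of the
    EfficientNMS_TRT plugin. How you fetch these arrays from the
    engine depends on your inference runner."""
    results = []
    for b in range(num_dets.shape[0]):   # batch dimension
        n = int(num_dets[b][0])          # valid detections for this image
        results.append({
            "boxes": boxes[b, :n],       # (n, 4) box coordinates
            "scores": scores[b, :n],     # (n,) confidence scores
            "classes": classes[b, :n],   # (n,) integer class ids
        })
    return results
```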

@zahidzqj

> https://github.com/thaitc-hust/yolov9-tensorrt/blob/main/torch2onnx.py#L27
>
> You may need to use output = output[1] instead.

I found some differences between the results of PyTorch inference and TensorRT NMS inference, but I don't know where they come from.
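One way to narrow this down (my own debugging sketch, not from the repo): compare PyTorch against plain ONNX Runtime first. If they already disagree, the export is the culprit (e.g. the output indexing discussed above); if they match, look at the TensorRT build instead (FP16 precision, NMS thresholds, preprocessing). Since the EfficientNMS_TRT node only runs inside TensorRT, this check assumes an ONNX exported without the NMS plugin; the function name and output indexing are assumptions.

```python
import numpy as np
import onnxruntime as ort
import torch

def compare_torch_vs_onnx(model, onnx_path, size=640):
    """Feed the same random tensor to PyTorch and ONNX Runtime and
    report the largest deviation between their raw prediction tensors."""
    model.eval()
    x = torch.randn(1, 3, size, size)
    with torch.no_grad():
        torch_out = model(x)[1].cpu().numpy()  # assumes output[1] is the head
    sess = ort.InferenceSession(onnx_path)
    onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})[0]
    print("max abs diff:", np.abs(torch_out - onnx_out).max())
```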
