F.conv onnx export better support #656
I guess this issue is obscure enough that nobody has really looked into it yet? @kevinch-nv Please have a look, the PyTorch code is included.
@mk-nvidia Please have a look. @KellenSunderland @kevinch-nv Please have a look.
@kevinch-nv @KellenSunderland Daily ping.
TRT requires the weights for a conv to be initializers, unless they are overwritten by an INT8 -> Float dequantize layer in QDQ networks. General support for tensor conv weights is unavailable at the moment.
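To make that distinction concrete, here is a hypothetical sketch (module name and shapes are illustrative, not from the issue): when the kernel is a registered `Parameter`, `torch.onnx.export` bakes it into the graph as an initializer, which the TRT parser can consume.

```python
import torch
import torch.nn.functional as F

class StaticConv(torch.nn.Module):
    """Same F.conv2d call, but the kernel is a registered Parameter,
    so torch.onnx.export emits it as an ONNX initializer rather than
    a graph input."""
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(8, 16, 3, 3))

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=1)

torch.onnx.export(StaticConv(), torch.randn(1, 16, 32, 32),
                  "static_conv.onnx", opset_version=11)
```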
Closing as duplicate of #609
Please test exporting this simple model:
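The original snippet was not preserved above; a minimal reconstruction of the kind of model being described (an `F.conv2d` whose weight is produced at runtime, as in SOLOv2-style dynamic heads, rather than stored as a module parameter; names and shapes are assumptions) might look like this:

```python
import torch
import torch.nn.functional as F

class DynamicConv(torch.nn.Module):
    def forward(self, x, weight):
        # The kernel arrives as an activation tensor (e.g. predicted by
        # another branch, as in SOLOv2), not as a registered Parameter,
        # so the exported ONNX Conv node receives a tensor weight input
        # instead of an initializer.
        return F.conv2d(x, weight, padding=1)

x = torch.randn(1, 16, 32, 32)
w = torch.randn(8, 16, 3, 3)
torch.onnx.export(DynamicConv(), (x, w), "dynamic_conv.onnx",
                  opset_version=11,
                  input_names=["x", "weight"], output_names=["out"])
```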
This is a dead-simple model, but PyTorch cannot export it in a form that is convertible to TRT.
When I convert it to TRT, I get:
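The original error output is not reproduced above. For context, a minimal sketch of the conversion step using this repo's Python backend (the API is from the onnx-tensorrt README; the file name matches the hypothetical model above) would be:

```python
import onnx
import onnx_tensorrt.backend as backend

# Attempt to build a TRT engine from the exported model; per the
# maintainer comment above, this fails at the Conv node because the
# kernel arrives as a tensor rather than an initializer.
model = onnx.load("dynamic_conv.onnx")
engine = backend.prepare(model)
```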
I'm not sure whether this is a problem on the PyTorch side or the onnx-tensorrt side, but I cannot convert any model that contains a self-defined F.conv op, for example SOLOv2.
Please help if anyone knows how to solve this.