fix: fix the parsing related model loading bug #1148
Conversation
Signed-off-by: Bo Wang <bowa@nvidia.com>
Code conforms to C++ style guidelines
Root cause explained here: https://github.com/pytorch/TensorRT/pull/1109/files#r903199440
Signed-off-by: Bo Wang <bowa@nvidia.com>
Code conforms to C++ style guidelines
Decided to fix it after fallback, since renaming the inputs in the upstream lowering code could become complex for if blocks.
Signed-off-by: Bo Wang <bowa@nvidia.com>
Description
When fallback is enabled, the converted model contains input names inherited from the lowered graph, such as x.1.
These names cannot be parsed by torch.jit.load().
This causes errors such as #973 and #1112.
We stop using the names from the lowered graph and instead assign input names consistent with the no-fallback path: input_0, input_1, ... A minimal sketch of the renaming idea follows.
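A minimal sketch of the renaming idea, not the exact patch: the helper name RenameGraphInputs and the ClassType check for the module's self input are assumptions for illustration.

```cpp
#include <torch/csrc/jit/ir/ir.h>

// Sketch: after partitioning/fallback, give every graph input a
// serializable name instead of a lowered-graph debug name like "x.1".
void RenameGraphInputs(std::shared_ptr<torch::jit::Graph>& g) {
  size_t idx = 0;
  for (auto* in : g->inputs()) {
    // Assumption: skip the module "self" input, which keeps its name.
    if (in->type()->kind() == c10::TypeKind::ClassType) {
      continue;
    }
    // "input_0", "input_1", ... matches the names used when there is
    // no fallback and round-trips through torch.jit.save()/load().
    in->setDebugName("input_" + std::to_string(idx++));
  }
}
```

Renaming after fallback keeps the change local; renaming in the upstream lowering code would require threading the new names through if-block subgraphs, as noted in the review comments above.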
Fixes #973, #1112
Type of change
Checklist: