🐛 [Bug] Error while loading Torch-TensorRT model (torch.jit.load) #973
Comments
Any progress here? Same problem.
Same problem here. Does anyone have any news?
@bowang007 can you take a look?
@handoku can you try this in your work environment?
@mjack3 I was using the NGC PyTorch 22.02 Docker image; the TensorRT version should be
Same problem.
Took a look, and I'm wondering whether the issue comes from module fallback. Are you using module fallback in your models? @edric1261234, @handoku, @mjack3
Yes, a fully compiled TRT engine works fine; the error only happens when some modules fall back.
Bug Description
The model below was converted into a Torch-TensorRT model, with the sub_function module excluded from the conversion. When the module is loaded with torch.jit.load, this error is raised.
To Reproduce
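The reporter's original reproduction code is not shown above. The following is a minimal, hypothetical sketch of the described workflow, not the actual model from the report: the model class, the `SubFunction` submodule name, and the input shape are illustrative assumptions. It compiles a model with Torch-TensorRT while excluding one submodule via `torch_executed_modules` (module fallback), saves the result, and reloads it with `torch.jit.load`, which is where the reported error occurs:

```python
import torch
import torch.nn as nn

# Hypothetical submodule standing in for the "sub_function" module that the
# report says was excluded from TensorRT conversion.
class SubFunction(nn.Module):
    def forward(self, x):
        return torch.relu(x)

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = SubFunction()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.sub(self.linear(x))

model = Model().eval()

# Torch-TensorRT and a CUDA device are required for the actual compilation
# step; guard so the sketch degrades gracefully on machines without them.
try:
    import torch_tensorrt
    HAVE_TRT = torch.cuda.is_available()
except ImportError:
    HAVE_TRT = False

if HAVE_TRT:
    trt_model = torch_tensorrt.compile(
        model.cuda(),
        inputs=[torch_tensorrt.Input((1, 4))],
        # Module fallback: keep SubFunction as TorchScript instead of
        # converting it to a TensorRT engine.
        torch_executed_modules=["SubFunction"],
    )
    torch.jit.save(trt_model, "trt_model.ts")
    # The error described in this issue is raised on the next line:
    loaded = torch.jit.load("trt_model.ts")
```

Note that `torch_executed_modules` takes fully qualified module class names to exclude; with a fully compiled engine (no entries in this list) the reload reportedly works, which matches the comments above pointing at module fallback.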
Expected behavior
The Torch-TensorRT model should load so that it can be used.
Environment
conda
,pip
,libtorch
, source): pipThe text was updated successfully, but these errors were encountered: