feat: Added support of GPU for predictors in PyTorch #808
Conversation
Codecov Report
@@           Coverage Diff           @@
##             main     #808   +/-   ##
=======================================
  Coverage   96.01%   96.01%
=======================================
  Files         131      131
  Lines        4942     4944    +2
=======================================
+ Hits         4745     4747    +2
  Misses        197      197
Thanks for the fix!
When using a pretrained model, passing map_location="gpu" throws an error.
@gganes3 you're loading params on GPU into modules that are on CPU, so this is expected. When instantiating your model, I'd suggest creating it on CPU, loading your params with map_location="cpu", and only then moving the model over to the GPU.
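A minimal sketch of that loading pattern, with a stand-in module and a hypothetical checkpoint path:

```python
import torch
import torch.nn as nn

# Instantiate the model on CPU first (the layer is a stand-in)
model = nn.Linear(16, 4)

# Load the checkpoint onto CPU so the params land on the same device
# as the freshly created modules ("model.pt" is a hypothetical path)
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)

# Only then move the whole model over to the GPU
model = model.to("cuda")
```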
This PR adds dynamic device selection for predictor inference.
This snippet:
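(a minimal sketch of the kind of snippet in question, assuming docTR's ocr_predictor API; the file path is illustrative)

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

# Build the OCR predictor and move it to the GPU
predictor = ocr_predictor(pretrained=True).cuda()

# Read a document (the path is illustrative)
doc = DocumentFile.from_pdf("path/to/document.pdf")

# Run end-to-end inference
result = predictor(doc)
```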
used to yield a device-mismatch error, because the model is on cuda while the inputs were never moved to the GPU.
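The fix boils down to looking up the device the model's parameters live on at call time, and moving the inputs there before the forward pass. A minimal sketch of that dynamic device selection, with hypothetical names (docTR's actual predictor internals differ):

```python
import torch
import torch.nn as nn

class TinyPredictor(nn.Module):
    """Toy predictor illustrating device-aware inference."""

    def __init__(self) -> None:
        super().__init__()
        self.backbone = nn.Linear(16, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Resolve the model's device dynamically and move inputs to it
        device = next(self.parameters()).device
        return self.backbone(x.to(device))

model = TinyPredictor()
if torch.cuda.is_available():
    model = model.cuda()

out = model(torch.randn(2, 16))  # CPU inputs are relocated automatically
```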
And now, the snippet executes correctly (and much faster than on CPU).
Since the predictions returned by each postprocessor are NumPy arrays (automatically moved back to CPU), this doesn't break any previous behaviour :)
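For reference, the usual PyTorch idiom for that final hop back to host memory looks like this (a sketch, not docTR's actual postprocessor code):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
logits = torch.randn(2, 4, device=device)

# .cpu() brings the tensor back to host memory before the NumPy conversion
preds = logits.detach().cpu().numpy()
print(type(preds))  # <class 'numpy.ndarray'>
```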
Any feedback is welcome!