Testing in dp mode uses only one of the GPUs #1213
Comments
ummm yeah, that's a bug. it should run via dp. @Ir1d want to submit a PR?
@williamFalcon I tried wrapping the model in …
you wouldn't wrap it yourself ever haha. the trainer needs to be modified to run the test on the correct method when done this way
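For context, a minimal sketch of the kind of wrapping the trainer performs internally in dp mode; plain `torch.nn.DataParallel` stands in here for Lightning's own internal wrapper, and `model` and `batch` are hypothetical placeholders:

```python
import torch

# Sketch only: in dp mode the trainer, not the user, wraps the model.
# torch.nn.DataParallel is a stand-in for Lightning's internal wrapper;
# `model` is a hypothetical nn.Module and `batch` a hypothetical input.
wrapped = torch.nn.DataParallel(model, device_ids=[0, 1]).cuda()

# Each forward call splits the batch across device_ids and gathers the
# outputs back on device 0 -- this is what should happen during .test().
outputs = wrapped(batch)
```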
I was trying to wrap it in …
Anyway, we've found one possible workaround here: …
`evaluate` is private... you're not meant to call it directly. call `.test()`
lightning does the wrapping by itself... the fact that this doesn't work is a bug.

```python
model = MyLightningModule.load_from_checkpoint(...)
trainer = Trainer(
    gpus="-1",
    distributed_backend='dp',
)
trainer.test(model)
```

The bug needs to be addressed correctly. It's weird because we have tests for this... double check that this is really not working for you.
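Before concluding that dp is broken, a quick sanity check (not from the thread itself) is to confirm that more than one GPU is actually visible to the process:

```python
import torch

# dp can only split work across the GPUs PyTorch can see.
print(torch.cuda.device_count())  # expect >= 2 for a meaningful dp test
```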
so let's rename it to start with `_`, to make it clear from the name that it is private
I was calling `.test()` and it's not working
@neggert could you have a look at this multi-GPU issue?
@neggert ping :) |
looking at this in the next sprint
fixed! (0.8.5) |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Run a test without training
Code sample
Modified from the conference-seed repo
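The code sample itself was not preserved in this capture; a minimal sketch in the spirit of the report, using a hypothetical `MyLightningModule` and checkpoint path, would look like:

```python
import pytorch_lightning as pl

# Hypothetical module and checkpoint path, standing in for the
# seed-repo model the report was based on.
model = MyLightningModule.load_from_checkpoint("path/to/checkpoint.ckpt")

trainer = pl.Trainer(
    gpus="-1",                  # all visible GPUs
    distributed_backend="dp",   # DataParallel backend
)

# No .fit() call: run the test loop directly, without training.
trainer.test(model)
```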
Expected behavior
Environment
- How you installed PyTorch (conda, pip, source): pip

Additional context