Unet: disable deterministic on training #2204
```diff
@@ -6,7 +6,6 @@
 from torch import optim
 from typing import Tuple

-torch.backends.cudnn.deterministic = True
 torch.backends.cudnn.benchmark = False

 from .pytorch_unet.unet import UNet

@@ -89,6 +88,7 @@ def jit_callback(self):
         self.model = torch.jit.script(self.model)

     def eval(self) -> Tuple[torch.Tensor]:
+        torch.backends.cudnn.deterministic = True
         self.model.eval()
         with torch.no_grad():
             with torch.cuda.amp.autocast(enabled=self.args.amp):
```

Review thread on the `torch.backends.cudnn.deterministic = True` line added to `eval()`:

- This function is not being used downstream.
- The problem, more generally, is that the pytorch repo benchmark does not consider the deterministic meta field in this repository, so it is hard to understand what to do there.
- What about removing line 91?
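For context, here is a minimal sketch of the pattern this PR moves to, with cudnn determinism requested per phase instead of forced at import time. The wrapper class and its attributes are illustrative stand-ins, not the repo's actual model wrapper:

```python
import torch
from typing import Tuple

# cudnn autotuning stays disabled at import time; determinism is no longer
# forced globally, mirroring the first hunk above.
torch.backends.cudnn.benchmark = False

class UNetBenchmark:
    """Illustrative stand-in for the benchmark's model wrapper (hypothetical name)."""

    def __init__(self, model: torch.nn.Module, example_input: torch.Tensor, amp: bool = False):
        self.model = model
        self.example_input = example_input
        self.amp = amp

    def eval(self) -> Tuple[torch.Tensor]:
        # Opt into deterministic cudnn kernels for the eval phase only, as the
        # second hunk does; training now runs with PyTorch defaults.
        torch.backends.cudnn.deterministic = True
        self.model.eval()
        with torch.no_grad():
            with torch.cuda.amp.autocast(enabled=self.amp):
                out = self.model(self.example_input)
        return (out,)
```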
```diff
@@ -5,4 +5,4 @@ eval_benchmark: false
 eval_deterministic: true
 eval_nograd: true
 train_benchmark: false
-train_deterministic: true
+train_deterministic: false
```
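The metadata keys above are what a benchmark harness would consult when configuring a run. As a hedged sketch of how such metadata could drive the backend flags (the loader below is an assumption for illustration, not TorchBench's actual API):

```python
import torch
import yaml  # assumes the metadata lives in a YAML file shaped like the diff above

def apply_backend_flags(metadata_path: str, phase: str) -> None:
    """Set cudnn flags for `phase` ("train" or "eval") from benchmark metadata."""
    with open(metadata_path) as f:
        meta = yaml.safe_load(f)
    torch.backends.cudnn.benchmark = meta.get(f"{phase}_benchmark", False)
    # PyTorch's own default for torch.backends.cudnn.deterministic is False,
    # so `train_deterministic: false` matches upstream default behavior.
    torch.backends.cudnn.deterministic = meta.get(f"{phase}_deterministic", False)
```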
Review thread on the `train_deterministic` change:

- Why all the removals you are proposing? What will the scope of this metadata be?
- @bhack The metadata would be the default value of PyTorch, which is `False`.
- Is it on this repo too?
- The version in …
- But who is going to decide whether two unet eager runs need to be the same or not?
- It seems that it could be deterministic when compiled, if I have interpreted this correctly. What do you think?
- Sorry about the confusion. Yes, it is defined by …
- So, as it seems to be deterministic only when compiled, at least in training, what do you want to do here and in the pytorch repo?
- In our previous tests it is deterministic in eager mode. If there is a PR that changes this behavior and makes it deterministic only when compiled and non-deterministic in eager mode, it is up to the PyTorch team to decide whether to accept it. If the PyTorch Dev Infra team does accept it, we can skip the eager-mode deterministic test for this model.
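The eager-mode claim in this exchange is testable directly. Here is a generic sketch of such a check, with the model factory and input as placeholders rather than the repo's actual harness:

```python
import torch

def eager_training_is_deterministic(make_model, example_input, steps: int = 3) -> bool:
    """Train two identically seeded models in eager mode and compare weights."""
    runs = []
    for _ in range(2):
        torch.manual_seed(0)
        model = make_model()
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        for _ in range(steps):
            opt.zero_grad()
            model(example_input).sum().backward()
            opt.step()
        runs.append([p.detach().clone() for p in model.parameters()])
    return all(torch.equal(a, b) for a, b in zip(*runs))

# Example usage with a toy model (a stand-in for UNet):
# print(eager_training_is_deterministic(lambda: torch.nn.Linear(4, 4), torch.randn(2, 4)))
```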
Review comment:

- I think we should also remove this line, to keep consistent behavior with upstream.
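For reference, PyTorch also exposes a broader determinism switch than the cudnn-specific flag discussed here; this is general PyTorch usage, not something this PR changes:

```python
import torch

# Stronger than torch.backends.cudnn.deterministic: makes PyTorch raise an
# error when an op has no deterministic implementation, instead of silently
# running a nondeterministic kernel.
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
```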