Unet: disable deterministic on training (#2204)
Summary:
It seems that the backward pass of some upsample CUDA kernels is not deterministic:
pytorch/pytorch#121324 (comment)

/cc ezyang
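
As a minimal sketch of how this kind of non-determinism shows up (assuming a CUDA device is available; the op, shapes, and loss here are illustrative and not taken from the linked issue), one can run the backward of an upsample twice with identical inputs and compare the gradients bitwise:

import torch
import torch.nn.functional as F

def upsample_grad(seed: int) -> torch.Tensor:
    # Recreate the exact same input, run a bilinear upsample, and backprop.
    torch.manual_seed(seed)
    x = torch.randn(2, 3, 64, 64, device="cuda", requires_grad=True)
    y = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    y.sum().backward()
    return x.grad.detach().clone()

g1 = upsample_grad(0)
g2 = upsample_grad(0)
# The CUDA backward for upsampling can accumulate contributions with atomic
# adds, so two identical runs may disagree in the last bits of the gradient.
print(torch.equal(g1, g2))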

Pull Request resolved: #2204

Reviewed By: aaronenyeshi

Differential Revision: D55576547

Pulled By: xuzhao9

fbshipit-source-id: ac117dfccbbea2fc38f84f97269c917194560674
bhack authored and facebook-github-bot committed Apr 2, 2024
1 parent fbb768d commit 1d2550c
Showing 2 changed files with 2 additions and 2 deletions.
torchbenchmark/models/pytorch_unet/__init__.py (1 addition, 1 deletion)
@@ -6,7 +6,6 @@
 from torch import optim
 from typing import Tuple

-torch.backends.cudnn.deterministic = True
 torch.backends.cudnn.benchmark = False

 from .pytorch_unet.unet import UNet
@@ -89,6 +88,7 @@ def jit_callback(self):
         self.model = torch.jit.script(self.model)

     def eval(self) -> Tuple[torch.Tensor]:
+        torch.backends.cudnn.deterministic = True
         self.model.eval()
         with torch.no_grad():
             with torch.cuda.amp.autocast(enabled=self.args.amp):
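
Condensed, the change moves the cuDNN determinism request out of module import and into the eval path only. A rough sketch of the resulting shape of the file (heavily abbreviated, not the full TorchBench model class; the forward pass is elided):

import torch
from typing import Tuple

# Still configured at import time.
torch.backends.cudnn.benchmark = False

class Model:
    def eval(self) -> Tuple[torch.Tensor]:
        # Determinism is now requested only when eval runs; training no longer
        # forces deterministic cuDNN kernels. Note the flag is process-global.
        torch.backends.cudnn.deterministic = True
        self.model.eval()
        with torch.no_grad():
            with torch.cuda.amp.autocast(enabled=self.args.amp):
                ...  # forward pass elided; the real method returns a tuple of outputs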
torchbenchmark/models/pytorch_unet/metadata.yaml (1 addition, 1 deletion)
@@ -5,4 +5,4 @@ eval_benchmark: false
 eval_deterministic: true
 eval_nograd: true
 train_benchmark: false
-train_deterministic: true
+train_deterministic: false
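
The metadata flag tells the harness not to expect bit-identical results across training runs for this model. A hypothetical sketch of how such a flag could be consumed (this is not TorchBench's actual metadata loader; the path is the one changed above):

import yaml  # PyYAML

with open("torchbenchmark/models/pytorch_unet/metadata.yaml") as f:
    metadata = yaml.safe_load(f)

# With train_deterministic now false, a determinism check over repeated
# training runs would be skipped for this model.
if not metadata.get("train_deterministic", False):
    print("pytorch_unet: skipping train determinism check")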
