
Set precision=16 when use_amp is passed as True #1145

Merged: 11 commits, Apr 6, 2020
add use_amp to deprecated API
rmrao authored and Borda committed Apr 6, 2020
commit 8cdfe2470cc2def4ec6818d2a27a21e5d040f756
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -82,6 +82,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed validation and training loops run the partial dataset ([#1192](https://github.com/PyTorchLightning/pytorch-lightning/pull/1192))
 - Fixed running `on_validation_end` only on main process in DDP ([#1125](https://github.com/PyTorchLightning/pytorch-lightning/pull/1125))
 - Fixes `use_amp` issue ([#1145](https://github.com/PyTorchLightning/pytorch-lightning/pull/1145))
+- Fixes using deprecated `use_amp` attribute ([#1145](https://github.com/PyTorchLightning/pytorch-lightning/pull/1145))
 
 ## [0.7.1] - 2020-03-07

7 changes: 4 additions & 3 deletions pytorch_lightning/trainer/trainer.py
@@ -443,12 +443,13 @@ def __init__(
             test_percent_check, overfit_pct)
 
         # 16 bit mixed precision training using apex
-        self.amp_level = amp_level
-        self.precision = precision
-
         if use_amp:
             warnings.warn("`use_amp` has been deprecated in favor of `precision` since v0.7.0"
                           " and will be removed in v0.9.0", DeprecationWarning)
             precision = 16
+
+        self.amp_level = amp_level
+        self.precision = precision
+        self.use_amp = use_amp
 
         assert self.precision in (16, 32), 'only 32 or 16 bit precision supported'
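
For readers skimming the diff: the change moves the `precision` and `amp_level` attribute assignments below the deprecation shim, so that `use_amp=True` rewrites the local `precision` to 16 before it is stored on the trainer. A minimal sketch of the resulting behavior follows; it is an illustration written against a 0.7.x-era install of pytorch-lightning, not code from this PR, and assumes the `Trainer` constructor still accepts the deprecated `use_amp` argument shown in the diff.

```python
# Sketch: passing the deprecated use_amp=True should emit a DeprecationWarning
# and leave the trainer configured for 16-bit precision (assumes a 0.7.x-era
# pytorch-lightning where the deprecated `use_amp` argument is still accepted).
import warnings

from pytorch_lightning import Trainer

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    trainer = Trainer(use_amp=True)  # deprecated spelling of precision=16

# Per the diff, use_amp=True rewrites `precision` to 16 before the attribute
# assignments run, so both spellings end up equivalent.
assert trainer.precision == 16
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

The ordering matters: had the attribute assignments stayed above the `if use_amp:` block, `self.precision` would have kept its default of 32 and the `precision = 16` rewrite would have had no effect, which is the bug the PR title describes.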