Drop the support for PyTorch<2.0 #3272
@@ -101,13 +101,12 @@ def tensors_default_to(host):
     :param str host: Either "cuda" or "cpu".
     """
     assert host in ("cpu", "cuda"), host
-    old_module, name = torch.Tensor().type().rsplit(".", 1)
-    new_module = "torch.cuda" if host == "cuda" else "torch"
-    torch.set_default_tensor_type("{}.{}".format(new_module, name))
+    old_host = torch.Tensor().device
+    torch.set_default_device(host)
Review thread on this change:

- nit: Could we move the …
- Do we need this context manager at all? How is it different from …
- Oh it would be great to replace this custom context manager with …
- It looks like … (https://github.com/pytorch/pytorch/releases). Introduced in this PR 9 months ago: …
- ok, well let's keep the polyfill until we drop support for torch==1.11.
- To clarify: this also applies to …
- Hmm.. it looks like … https://pytorch.org/docs/1.13/search.html?q=set_default&check_keywords=yes&area=default
- Pyro has always aimed at being more stable than torch, and we have historically implemented polyfills in Pyro to smooth over PyTorch's move-fast-and-break-things attitude. If I had time, I'd implement polyfills like a …

  The motivation for being very stable is to avoid giving people headaches. Every time we drop support for some version of an underlying library, some grad student wastes a day trying to install an old repo whose author has graduated and didn't pin versions. Every time we pin a library version, some software engineer at BigCo wastes a week solving dependency issues between conflicting libraries with non-overlapping version pins: spending a day committing to an upstream repo maintained by an overcommitted professor, building polyfills around another dependency (which doesn't explicitly pin versions but actually depends on a version outside our range, which took half a day to figure out), and replacing a third library that is no longer maintained.

  If you do decide to drop torch 1.11 support, could you update the version pins everywhere and update the supported Python versions? And we'll bump the minor version in our next release.

- I confirmed this by installing it locally: … Can you confirm that it's ok to drop support for all of torch 1.11, 1.12, and 1.13?
- Sure, let's just be sure to announce it in our release notes and bump the minor version.
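The polyfill idea floated in the thread might look roughly like the following. This is a minimal sketch, not code from this PR; the fallback behavior and the choice to monkey-patch `torch` are assumptions.

```python
import torch

# Hypothetical polyfill (not part of this PR): provide torch.set_default_device
# on older torch builds by falling back to set_default_tensor_type().
if not hasattr(torch, "set_default_device"):

    def _set_default_device(device):
        device = str(device)
        # Note: this fallback ignores the current default dtype and only
        # toggles between float32 tensor types on CPU and CUDA.
        if device.startswith("cuda"):
            torch.set_default_tensor_type(torch.cuda.FloatTensor)
        elif device.startswith("cpu"):
            torch.set_default_tensor_type(torch.FloatTensor)
        else:
            raise ValueError("Unsupported default device: {}".format(device))

    torch.set_default_device = _set_default_device
```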
The hunk continues:

     try:
         yield
     finally:
-        torch.set_default_tensor_type("{}.{}".format(old_module, name))
+        torch.set_default_device(old_host)


 @contextlib.contextmanager
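For reference, a sketch (assuming torch >= 2.0 and a CUDA-capable install) of what the updated helper does, alongside the built-in torch.device context manager that the thread suggests could replace it:

```python
import torch

# Explicit save/restore, mirroring the updated tensors_default_to() helper.
old_device = torch.Tensor().device
torch.set_default_device("cuda")
try:
    x = torch.zeros(3)  # factory calls now allocate on CUDA by default
finally:
    torch.set_default_device(old_device)

# Built-in alternative: torch.device works as a context manager in torch >= 2.0.
with torch.device("cuda"):
    y = torch.zeros(3)  # also allocated on CUDA; default is restored on exit
```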
A second file in the diff:
@@ -856,10 +856,10 @@ def test_profile(backend, jit, n=1, num_steps=1, log_every=1):
     args = parser.parse_args()

-    torch.set_default_dtype(torch.double if args.double else torch.float)
+    if args.double:
+        torch.set_default_dtype(torch.float64)
Review comment: delete these two new lines (they are unnecessary given line 858 above).
The hunk continues:

     if args.cuda:
-        torch.set_default_tensor_type(
-            torch.cuda.DoubleTensor if args.double else torch.cuda.FloatTensor
-        )
+        torch.set_default_device("cuda")

     if args.profile:
         p = cProfile.Profile()
Review comment: Is the set_default_dtype needed here? I see it omitted in other changes.
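To illustrate why the question comes up (a sketch, assuming torch >= 2.0 with CUDA available): set_default_device changes only where factory calls allocate, not the default dtype, so the --double case still needs set_default_dtype separately, whereas the old torch.cuda.DoubleTensor default set both at once.

```python
import torch

torch.set_default_device("cuda")
print(torch.empty(1).dtype, torch.empty(1).device)  # torch.float32 cuda:0

torch.set_default_dtype(torch.float64)
print(torch.empty(1).dtype, torch.empty(1).device)  # torch.float64 cuda:0
```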