run opted out decompositions #3646
Conversation
Force-pushed from 83d30ed to 8c7e2ec.
cc @JackCaoG this is the "there are a handful of problematic decompositions" thing I mentioned a few weeks ago
Force-pushed from 8c7e2ec to 3ceb49c.
"… & other functionalization fixes"

I moved out the changes to `FunctionalTensorWrapper.h` from the LTC <> functionalization PR into a separate PR here, so dealing with XLA failures will be a bit easier. Specifically, the LTC PR will make a few operators like `pixel_shuffle`, which are functional but decompose into view ops, require re-functionalization once they hit the XLA backend. This PR exposes a helper utility to do that through `functionalize_aten_op`.

This PR also contains changes to:
- fix `detach()` for `FunctionalTensorWrapper`
- fix some undefined tensor handling cases

I have an XLA patch here to do the re-functionalizing: pytorch/xla#3646

[ghstack-poisoned]
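For context, a rough sketch of what calling that helper from an XLA backend kernel might look like. The template/macro spelling (`functionalize_aten_op`, `ATEN_OP`) and include paths are assumptions based on the description above, not taken from the patch itself:

```cpp
#include <ATen/ATen.h>
#include <ATen/FunctionalTensorWrapper.h>  // assumed location of the helper
#include <ATen/Operators.h>                // assumed: provides the ATEN_OP macro

// Hypothetical XLA kernel for a functional op whose core decomposition is made
// of view ops. Instead of running the decomposition directly, re-enter the
// functionalization pass for just this op, so the XLA backend only ever sees
// functional ops coming out of it.
at::Tensor xla_pixel_shuffle(const at::Tensor& self, int64_t upscale_factor) {
  return at::functionalization::functionalize_aten_op<ATEN_OP(pixel_shuffle)>::call(
      self, upscale_factor);
}
```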
Thanks!
Force-pushed from 3ceb49c to 939566d.
Force-pushed from 552fab9 to 29af626.
I realized that we can't actually have XLA use the "functionalize under the hood" helper yet, because pt/xla doesn't have the functionalization pass enabled. So the plan is:

(1) this PR: manually register the ~9-10 decompositions from core, by just calling into the decompositions directly (e.g. …)
(2) I'll land my TS integration PR (which would have broken xla if not for landing this PR first)
(3) pt/xla opts into the functionalization pass at some point later, which'll require updating the decomps in this PR to use the …
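For illustration, step (1) might look something like the sketch below in the XLA backend. The symbol `at::native::math_pixel_shuffle` is an assumption about where core's reference decomposition lives; the actual entry points registered in this PR may differ:

```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

// Hypothetical interim XLA kernel (the approach taken in this PR): bypass the
// "functionalize under the hood" helper and call core's reference
// decomposition for the op directly, letting the ops it emits dispatch back
// to the backend as usual.
at::Tensor xla_pixel_shuffle_interim(const at::Tensor& self, int64_t upscale_factor) {
  // NOTE: math_pixel_shuffle is assumed to be the composite decomposition in
  // core; the real symbol name may differ.
  return at::native::math_pixel_shuffle(self, upscale_factor);
}
```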
Force-pushed from 29af626 to c673d9e.
@pytorchbot merge
@bdhirsh Does pytorchbot work for pytorch/xla too?
With functionalization, there are a handful of "problematic" decompositions in core, where we have a functional operator that decomposes into view operators after the functionalization pass has already run.
I've enumerated them here, and added a helper function so you can run the decomposition by "re-functionalizing" it.
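To make the "functional op that decomposes into view ops" point concrete, here is a simplified sketch of such a decomposition (4-D NCHW input only; not the actual implementation in core):

```cpp
#include <ATen/ATen.h>

// Simplified pixel_shuffle-style decomposition. The op is functional at its
// boundary (fresh output, no aliasing with the input), but the body is built
// from view ops (reshape/permute), which is what makes running it after the
// functionalization pass has already happened problematic.
at::Tensor pixel_shuffle_via_views(const at::Tensor& self, int64_t r) {
  const int64_t n = self.size(0);
  const int64_t c = self.size(1) / (r * r);
  const int64_t h = self.size(2);
  const int64_t w = self.size(3);
  auto x = self.reshape({n, c, r, r, h, w});  // view op on contiguous input
  x = x.permute({0, 1, 4, 2, 5, 3});          // view op
  return x.reshape({n, c, h * r, w * r});     // typically copies here, since x is no longer contiguous
}
```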
Waiting to land this PR until after I've landed pytorch/pytorch#79420