
run opted out decompositions #3646

Merged: 1 commit merged into master from functionalization_decompositions on Jun 16, 2022
Conversation

@bdhirsh (Collaborator) commented Jun 13, 2022

With functionalization, there are a handful of "problematic" decompositions in core: functional operators that decompose into view operators after the functionalization pass has already run.

I've enumerated them here and added a helper function so you can run such a decomposition by "re-functionalizing" it.

Waiting to land this PR until after I've landed pytorch/pytorch#79420
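
For context, here is a rough C++/ATen sketch of the kind of op being described: a functional operator whose decomposition is written entirely in terms of view ops, so running the decomposition after functionalization re-introduces view calls that the backend then has to handle. The function name and the exact reshape/permute sequence are illustrative assumptions based on `pixel_shuffle`'s documented semantics, not the actual decomposition in core.

```cpp
#include <ATen/ATen.h>

// Illustrative sketch only: a "functional" op implemented via view ops.
// (N, C*r*r, H, W) -> (N, C, H*r, W*r)
at::Tensor pixel_shuffle_sketch(const at::Tensor& self, int64_t r) {
  const int64_t n = self.size(0);
  const int64_t c = self.size(1) / (r * r);
  const int64_t h = self.size(2);
  const int64_t w = self.size(3);
  auto x = self.reshape({n, c, r, r, h, w});  // view
  x = x.permute({0, 1, 4, 2, 5, 3});          // view: (N, C, H, r, W, r)
  return x.reshape({n, c, h * r, w * r});     // materializes the shuffled result
}
```

If this decomposition runs after the functionalization pass, the backend sees raw `reshape`/`permute` calls again, which is what the description above means by "problematic".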

@bdhirsh (Collaborator, Author) commented Jun 13, 2022

cc @JackCaoG: this is the "there are a handful of problematic decompositions" issue I mentioned a few weeks ago.

@bdhirsh force-pushed the functionalization_decompositions branch from 8c7e2ec to 3ceb49c on June 13, 2022 19:49
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 13, 2022: …p; other functionalization fixes

I moved the changes to `FunctionalTensorWrapper.h` out of the LTC <> functionalization PR into a separate PR here, so dealing with XLA failures will be a bit easier.

Specifically, the LTC PR will make a few operators like `pixel_shuffle`, which are functional but decompose into view ops, require re-functionalization once they hit the XLA backend. This PR exposes a helper utility for that, `functionalize_aten_op`. This PR also contains changes to:
- fix `detach()` for `FunctionalTensorWrapper`
- fix some undefined-tensor handling cases

I have an XLA patch here to do the re-functionalizing: pytorch/xla#3646

[ghstack-poisoned]
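
The commit message above refers to a `functionalize_aten_op` helper for re-functionalizing such ops at the backend boundary. As a hedged sketch (not code from this PR), an XLA kernel might invoke it roughly like this; the `ATEN_OP`-templated calling convention and the `xla_pixel_shuffle` entry point are assumptions for illustration:

```cpp
#include <ATen/ATen.h>
#include <ATen/FunctionalTensorWrapper.h>
#include <ATen/Operators.h>  // ATEN_OP

// Hypothetical backend kernel: re-run functionalization for one "problematic"
// functional op so its view-based decomposition is converted back into
// functional ops before it reaches the XLA backend.
at::Tensor xla_pixel_shuffle(const at::Tensor& self, int64_t upscale_factor) {
  return at::functionalization::functionalize_aten_op<ATEN_OP(pixel_shuffle)>::call(
      self, upscale_factor);
}
```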
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 13, 2022: …nalization fixes
@JackCaoG (Collaborator) left a comment:

Thanks!

bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 13, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 13, 2022: …nalization fixes
@bdhirsh force-pushed the functionalization_decompositions branch from 3ceb49c to 939566d on June 13, 2022 21:18
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 14, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 14, 2022: …nalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 15, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 15, 2022: …nalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 15, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 15, 2022: …nalization fixes
@bdhirsh force-pushed the functionalization_decompositions branch 3 times, most recently from 552fab9 to 29af626, on June 16, 2022 02:05
@bdhirsh (Collaborator, Author) commented Jun 16, 2022

I realized that we can't actually have XLA use the "functionalize under the hood" helper yet, because pt/xla doesn't have the `view_copy` operators. So the dependency ordering is really:

(1) This PR: manually register the ~9-10 decompositions from core by calling into the decompositions directly (e.g. `at::native::slice_backward`); a sketch is shown after this list.

(2) I'll land my TS integration PR (which would have broken XLA if this PR hadn't landed first).

(3) pt/xla opts into the functionalization pass at some point later, which will require updating the decomps in this PR to use the `at::functionalization::functionalize_op` helper I mentioned.
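
A rough sketch of what (1) could look like for one of the listed decompositions; the `xla_slice_backward` entry point and the exact `at::native::slice_backward` signature are assumptions for illustration rather than code from this PR:

```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

// Hypothetical step (1): register the op for XLA by calling the core
// decomposition directly, which avoids needing view_copy ops in pt/xla for now.
at::Tensor xla_slice_backward(const at::Tensor& grad_output,
                              at::IntArrayRef input_sizes, int64_t dim,
                              int64_t start, int64_t end, int64_t step) {
  return at::native::slice_backward(grad_output, input_sizes, dim, start, end, step);
}
```

Once pt/xla opts into functionalization (step 3), this direct call would be swapped for the re-functionalizing helper described above.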

@bdhirsh force-pushed the functionalization_decompositions branch from 29af626 to c673d9e on June 16, 2022 13:39
@bdhirsh (Collaborator, Author) commented Jun 16, 2022

@pytorchbot merge

bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 16, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 16, 2022: …nalization fixes
@JackCaoG (Collaborator) commented:

@bdhirsh Does the pytorch bot work for pytorch/xla too?

@JackCaoG merged commit 350daa5 into master on Jun 16, 2022
@JackCaoG deleted the functionalization_decompositions branch on June 16, 2022 19:18
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 17, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 17, 2022: …nalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 17, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 17, 2022: …nalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 21, 2022: …p; other functionalization fixes
bdhirsh added a commit to pytorch/pytorch that referenced this pull request on Jun 21, 2022: …nalization fixes
pytorchmergebot added a commit to pytorch/pytorch that referenced this pull request on Jun 22, 2022: …p; other functionalization fixes
pytorchmergebot added a commit to pytorch/pytorch that referenced this pull request on Jun 22, 2022: …nalization fixes