[Intel GPU] Add NestedTensorXPU to parseDispatchKey and codegen #140461
Conversation
🔗 See artifacts and rendered test results at hud.pytorch.org/pr/140461.
✅ No failures as of commit c894cb3 with merge base 9a051f6. (This comment was automatically generated by Dr. CI.)
Attention: native_functions.yaml was changed. If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one that adds the new C++ functionality, and one that uses it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.
@min-jean-cho, please help evaluate the failed cases.
It is a known issue and we will fix it: xpu / linux-jammy-xpu-py3.9 / test (default, 2, 4, linux.idc.xpu).
At the current stage, we only need the definition of NestedTensorXPU in PyTorch in-tree. As part of the operator implementation, NestedTensorXPU is only used in the torch-xpu-ops native_functions.yaml to declare a dispatch for XPU.
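To make the point concrete, here is a minimal sketch of why the key must be parseable: once parseDispatchKey accepts "NestedTensorXPU", downstream code (for example kernel registrations driven by the torch-xpu-ops native_functions.yaml, or torch.library registrations) can refer to the key by name. The library name "demo", the op "double_values", and the kernel body below are hypothetical, and an XPU-enabled build is assumed.

```python
# A minimal sketch, assuming an XPU-enabled PyTorch build. The library name
# "demo", the op "double_values", and the kernel are hypothetical; they only
# illustrate that the dispatch-key string "NestedTensorXPU" is now accepted.
import torch

lib = torch.library.Library("demo", "DEF")
lib.define("double_values(Tensor x) -> Tensor")

def double_values_nested_xpu(x):
    # Placeholder kernel for nested tensors living on an XPU device.
    return x * 2

# Registration keyed on the new dispatch key; before this change the string
# "NestedTensorXPU" could not be parsed into a DispatchKey.
lib.impl("double_values", double_values_nested_xpu, "NestedTensorXPU")
```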
Thanks, updated.
Could we refine the title?
Thanks @guangyey for the feedback, updated.
@pytorchbot merge -r
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased 0a672a0 to c894cb3.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…rch#140461)

Add `NestedTensorXPU` dispatch key.

```
>>> nt = torch.nested.nested_tensor([]).to("xpu")
>>> nt
nested_tensor([
], device='xpu:0')
>>> nt.is_xpu
True
```

Pull Request resolved: pytorch#140461
Approved by: https://github.com/guangyey, https://github.com/EikanWang, https://github.com/ezyang
Part of intel/torch-xpu-ops#1141.
Add `NestedTensorXPU` dispatch key.
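For completeness, a slightly fuller usage sketch than the empty-tensor example above, assuming an XPU-enabled build with an available XPU device; shapes and values are illustrative only.

```python
import torch

# Build a nested tensor from ragged components and move it to the XPU device
# (assumes torch.xpu.is_available() returns True on this machine).
nt = torch.nested.nested_tensor(
    [torch.arange(3, dtype=torch.float32), torch.arange(5, dtype=torch.float32)]
).to("xpu")

print(nt.is_nested)  # True
print(nt.is_xpu)     # True
print(nt.device)     # xpu:0
```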