Support NestedTensor for xpu device

🚀 The feature, motivation and pitch

Support NestedTensor for the xpu device:
- [Intel GPU] Add NestedTensorXPU to parseDispatchKey and codegen pytorch/pytorch#140461
- Add RegisterNestedTensorXPU codegen #1140
- Add NestedTensorXPU dispatch for device-agnostic NestedTensor Ops #1142 (see the registration sketch after this list)
- [Intel GPU] Add TORCH_API macro to export symbol NestedTensor_to_mask for libtorch_xpu pytorch/pytorch#145467
- Add NestedTensor XPU ops
  - _nested_from_padded (Add basic nested tensor operators #1045)
  - _nested_tensor_softmax_with_shape (Add aten::_nested_tensor_softmax_with_shape #1323)
- Add NestedTensor XPU UTs
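To make the dispatch item above concrete, here is a minimal, hypothetical sketch of how a kernel could be registered under the NestedTensorXPU dispatch key with `TORCH_LIBRARY_IMPL`. The kernel name and body are placeholders and not code from torch-xpu-ops; the real registrations come from the RegisterNestedTensorXPU codegen and libtorch_xpu.

```cpp
// Hypothetical sketch only: shows the registration path for the
// NestedTensorXPU dispatch key, not an actual torch-xpu-ops kernel.
#include <ATen/ATen.h>
#include <torch/library.h>

namespace {

// Placeholder kernel; a real implementation would launch work on the xpu
// device (e.g. the nested softmax or padding kernels listed above).
at::Tensor nested_relu_xpu_stub(const at::Tensor& self) {
  return self.clone();  // not a real relu; illustrates the plumbing only
}

}  // namespace

// Route aten::relu calls on nested tensors that live on an xpu device to the
// stub above. Ops such as _nested_from_padded or
// _nested_tensor_softmax_with_shape would be registered the same way.
TORCH_LIBRARY_IMPL(aten, NestedTensorXPU, m) {
  m.impl("relu", TORCH_FN(nested_relu_xpu_stub));
}
```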
Example
>>> nt = torch.nested.nested_tensor([]).to("xpu")
>>> nt
nested_tensor([
], device='xpu:0')
>>> nt.is_xpu
True
Alternatives

No response

Additional context

No response
Commit 7c314bf: [Intel GPU] Add TORCH_API macro to export symbol NestedTensor_to_mask for libtorch_xpu (#145467)

Part of intel/torch-xpu-ops#1141. The `TORCH_API` macro is added to export the symbol `NestedTensor_to_mask`, which is needed by libtorch_xpu for `NestedTensor_softmax_dropout_xpu`.

Pull Request resolved: #145467
Approved by: https://github.com/guangyey, https://github.com/ezyang
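As a rough illustration of what that change does (not the verbatim PyTorch header, and the exact signature may differ), `TORCH_API` marks the declaration as exported so the symbol can be resolved when libtorch_xpu links against it:

```cpp
// Sketch of the export pattern; the real declaration lives in ATen's nested
// tensor headers and its signature may differ from what is shown here.
#include <ATen/core/Tensor.h>
#include <c10/macros/Export.h>
#include <optional>

namespace at::native {

// Without TORCH_API the symbol may not be exported from the core library,
// so NestedTensor_softmax_dropout_xpu in libtorch_xpu cannot resolve it.
TORCH_API at::Tensor NestedTensor_to_mask(
    const at::Tensor& nt,
    std::optional<int64_t> mask_dim,
    std::optional<int64_t> mask_dim_length);

}  // namespace at::native
```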
Commit b6786e3: Add aten::_nested_tensor_softmax_with_shape (#1323)

Part of #1141. Depends on pytorch/pytorch#145467.
- `_nested_tensor_softmax_with_shape`
Participants: daisyden, fengyuan14, xytintel, min-jean-cho