Function Schema:
torch.ops.aten.arange.start: ((), {})
torch.ops.aten.rsub.Scalar: ((torch.float32,), {})
torch.ops.aten._to_copy.default: ((torch.int32,), {})
torch.ops.aten.embedding.default: ((torch.float32, torch.int64), {})
torch.ops.aten.embedding.default: ((torch.float32, torch.int32), {})
torch.ops.aten.layer_norm.default: ((torch.float32, None, torch.float32, torch.float32), {})
torch.ops.aten.addmm.default: ((torch.float32, torch.float32, torch.float32), {})
torch.ops.aten._softmax.default: ((torch.float32,), {})
torch.ops.aten.where.self: ((torch.bool, torch.float32, torch.float32), {})
Original PyTorch API: torch.arange, torch.embedding, torch.layer_norm, torch.addmm, torch._softmax, torch.where
Relevant TensorRT Documentation: IElementWiseLayer, IConstantLayer
Add support for the above function schemas as aten converters.
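For reference, here is a minimal sketch of what one such converter could look like, using torch.ops.aten.rsub.Scalar as the example since it maps naturally onto IConstantLayer and IElementWiseLayer. The import path, the tensorrt_converter registration decorator, and the (network, target, args, kwargs, name) signature are assumptions based on the existing FX converter convention in torch_tensorrt and may need adjusting to the final dynamo converter registry; only add_constant, add_elementwise, and ElementWiseOperation are the documented TensorRT Python API.

```python
# Hedged sketch of an aten.rsub.Scalar converter; registry/decorator details are assumptions.
import numpy as np
import tensorrt as trt
import torch

from torch_tensorrt.fx.converter_registry import tensorrt_converter  # assumed registry path


@tensorrt_converter(torch.ops.aten.rsub.Scalar)
def aten_ops_rsub_scalar(network, target, args, kwargs, name):
    # aten.rsub.Scalar(input, other, alpha=1) computes: other - alpha * input
    input_trt = args[0]                     # ITensor produced by an upstream layer
    other = float(args[1])                  # Python scalar operand
    alpha = float(args[2]) if len(args) > 2 else 1.0

    # Materialize the scalars as broadcastable constants (IConstantLayer).
    const_shape = (1,) * len(input_trt.shape)
    other_const = network.add_constant(
        const_shape, trt.Weights(np.full(const_shape, other, dtype=np.float32))
    )
    alpha_const = network.add_constant(
        const_shape, trt.Weights(np.full(const_shape, alpha, dtype=np.float32))
    )

    # alpha * input, then other - (alpha * input), both via IElementWiseLayer.
    scaled = network.add_elementwise(
        alpha_const.get_output(0), input_trt, trt.ElementWiseOperation.PROD
    )
    rsub = network.add_elementwise(
        other_const.get_output(0), scaled.get_output(0), trt.ElementWiseOperation.SUB
    )
    rsub.name = name
    return rsub.get_output(0)
```

The same IConstantLayer-plus-IElementWiseLayer pattern covers the scalar operands that appear in several of the schemas above; the remaining ops (embedding, layer_norm, addmm, _softmax, where, _to_copy, arange) would each need their own converter following the same registration pattern.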
Added #1790, #1794, #1796, #1797, #1798, #1799 for the subtasks.
#1724 (additional subtasks)
torch_tensorrt.dynamo.compile