This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


❓ [Question] How to Enable the Torch-TensorRT Partition Feature? #876

Closed
huangxiao2008 opened this issue Feb 16, 2022 · 2 comments
Labels
question Further information is requested

Comments

@huangxiao2008

❓ Question

Hello,

I want to use TensorRT to run VectorNet from https://github.com/xk-huang/yet-another-vectornet

However, when I try to convert the TorchScript model using torchtrtc, it terminates with an unsupported op: torch_scatter::scatter_max

terminate called after throwing an instance of 'torch::jit::ErrorReport'
  what():
Unknown builtin op: torch_scatter::scatter_max.
Could not find any similar ops to torch_scatter::scatter_max. This op may not exist or may not be currently supported in TorchScript.
:
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_scatter/scatter.py(72): scatter_max
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_scatter/scatter.py(160): scatter
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch_geometric/nn/conv/message_passing.py(426): aggregate
/tmp/tom.hx_pyg/tmpjesxc50s.py(168): propagate
/tmp/tom.hx_pyg/tmpjesxc50s.py(188): forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl
/Data0/Users/tom.hx/work/ai-compiler/tvm/vectornet_test/modeling/subgraph.py(50): forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl
/Data0/Users/tom.hx/work/ai-compiler/tvm/vectornet_test/modeling/vectornet.py(52): forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/nn/modules/module.py(1102): _call_impl
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/jit/_trace.py(965): trace_module
/Data0/Users/tom.hx/.local/lib/python3.6/site-packages/torch/jit/_trace.py(750): trace
profile.py(156): <module>
Serialized   File "code/__torch__/GraphLayerPropJittable_4074db.py", line 15
    src = torch.index_select(_0, -2, index)
    index0 = torch.select(edge_index, 0, 1)
    aggr_out, _1 = ops.torch_scatter.scatter_max(src, index0, -2, None, 225)
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    return torch.cat([_0, aggr_out], 1)

Aborted

I have noticed that Torch-TensorRT can fall back to native PyTorch when TensorRT does not support some of the model's subgraphs.

The question is: why doesn't this feature work here, and how do I enable it?

@huangxiao2008 huangxiao2008 added the question Further information is requested label Feb 16, 2022
@narendasan
Collaborator

narendasan commented Feb 16, 2022

It is enabled by default. Your compilation is failing because you are using ops from a third-party library (not torch), and those ops are not loaded by the torchtrtc program, so neither PyTorch nor Torch-TensorRT knows about them when the model is deserialized. The easiest way to try this is the Python API, with torch_scatter imported as well.

So something like:

import torch # imports standard PyTorch ops and APIs 
import torch_scatter # imports custom ops and registers with PyTorch
import torch_tensorrt

...

trt_model = torch_tensorrt.compile(my_model, ...) # by default `require_full_compilation = False` - i.e. partial compilation 
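For completeness, partial compilation can also be steered explicitly. The sketch below is illustrative only (not from the thread): `my_model` and the `(1, 64, 128)` input shape are assumed placeholders, and `torch_executed_ops` pins the unsupported custom op to run in PyTorch while TensorRT handles the rest:

```python
import torch
import torch_scatter      # registers the torch_scatter::* custom ops with PyTorch
import torch_tensorrt

# `my_model` and the input shape are assumptions for this sketch.
scripted = torch.jit.script(my_model)

trt_model = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 64, 128), dtype=torch.float32)],
    enabled_precisions={torch.float32},
    # Partial compilation is the default (require_full_compilation=False);
    # ops listed here are forced to execute in PyTorch rather than TensorRT.
    torch_executed_ops=["torch_scatter::scatter_max"],
    min_block_size=3,  # avoid creating tiny single-op TensorRT engines
)
```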

@huangxiao2008
Author


Thanks very much!

@pytorch pytorch locked and limited conversation to collaborators Feb 19, 2022
@narendasan narendasan converted this issue into discussion #886 Feb 19, 2022

