Collection Support [In Progress] #802
Conversation
@narendasan @peri044, the first version of the collection feature is ready. Can you review the code?
Took a high-level pass. I think the logic is mostly fine, but there are a lot of usability issues around the messaging.
core/ir/ir.cpp (outdated)
InputSpecMap pair_input_vals_with_specs(std::vector<const torch::jit::Value*> vals, std::vector<Input> specs) {
  LOG_DEBUG("pair_input_vals_with_specs");
Could we have a better message here, or just remove it if it's not useful? Perhaps dump what the pairings are?
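A minimal sketch of what dumping the pairings could look like, mirroring the loop in the collection variant below (assuming Input is streamable, as the quoted hunks already do):

// Sketch: log each value/spec pairing instead of the bare function name
for (size_t i = 0; i < vals.size(); i++) {
  LOG_DEBUG("Pairing " << i << ": " << vals[i]->debugName() << " : " << specs[i]);
}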
core/ir/ir.cpp (outdated)
@@ -27,19 +36,59 @@ InputSpecMap pair_input_vals_with_specs(std::vector<const torch::jit::Value*> va
  return a;
}

CollectionInputSpecMap pair_input_vals_with_specs_collection(std::vector<const torch::jit::Value*> vals, std::vector<std::vector<Input>>& specs) {
  LOG_DEBUG("pair_input_vals_with_specs collection");
Same thing here
CollectionInputSpecMap a;
for (size_t i = 0; i < vals.size(); i++) {
  LOG_DEBUG("Paring " << i << ": " << vals[i]->debugName() << " : " << specs[i]);
It would be good to make this slightly more formatted so it stands out in the log
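A sketch of one way to make it stand out, using the same variables as the quoted hunk (and fixing the "Paring" typo):

// Emit a header once, then one indented, bracketed line per pairing
LOG_DEBUG("Pairing graph inputs to specs:");
for (size_t i = 0; i < vals.size(); i++) {
  LOG_DEBUG("    [" << i << "] " << vals[i]->debugName() << " : " << specs[i]);
}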
core/ir/ir.cpp (outdated)
std::vector<const torch::jit::Value*> get_tensor_inputs(
    std::shared_ptr<torch::jit::Graph>& g,
    StaticParams& static_params) {
  std::vector<const torch::jit::Value*> input_tensors;
  auto inputs = g->inputs();
  LOG_DEBUG("Inputs size " << inputs.size());
Not sure we need to keep this; if we do, it needs to be more descriptive.
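A hedged sketch of a more descriptive alternative, using only names from the quoted hunk:

LOG_DEBUG("get_tensor_inputs: inspecting " << inputs.size() << " graph inputs");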
core/ir/ir.cpp (outdated)
for (auto in : inputs) {
  LOG_DEBUG("input debug name: " << in->debugName());
Same here
tests/cpp/test_collection.cpp (outdated)
TEST(CppAPITests, TestCollectionListInput) {
  std::string path = "/root/Torch-TensorRT/list_input.ts";
And here
core/compiler.cpp (outdated)
// ir::TypeMap& first_use_type_map) {
// Associate input specs with inputs
// cfg.convert_info.inputs = std::move(ir::associate_specs_with_inputs(g, cfg.inputs, static_params));
cfg.convert_info.collection_inputs = std::move(ir::associate_specs_with_collection_inputs(g, cfg.graph_inputs, static_params));
Why do we call this twice with different APIs?
This will get the input spec map. (collection_inputs may be confused with GraphInput.collection_input; it should be renamed to collection_input_spec_map.)
core/compiler.cpp (outdated)
cfg.convert_info.collection_inputs = std::move(ir::associate_specs_with_collection_inputs(g, cfg.graph_inputs, static_params));

auto collection_inputs = ir::get_collection_inputs(g, static_params);
LOG_DEBUG("In MapInputsAndDetermineDTypes " << "g->inputs() size " << g->inputs().size() << ", collection_inputs size " << collection_inputs.size());
Better logging messages
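For example, a sketch that keeps the same facts but reads more cleanly (names from the quoted hunk):

LOG_DEBUG("MapInputsAndDetermineDTypes: " << g->inputs().size() << " graph inputs, " << collection_inputs.size() << " collection inputs");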
core/compiler.cpp (outdated)
LOG_INFO(
    "Since input type is not explicitly defined, infering using first tensor calculation\n Found input "
    << in->debugName() << " has type " << est_type_opt[i].value()
    << ". If this is incorrect explicitly set dtype for input and file a bug");
I don't think we should tell them to file a bug here, since we just use a heuristic; it's not always going to be right.
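A possible rewording that drops the bug-filing suggestion and uses "Inferred" as requested below (a sketch, not the final message):

LOG_INFO(
    "Since input type is not explicitly defined, inferring using first tensor calculation\n Inferred input "
    << in->debugName() << " has type " << est_type_opt[i].value()
    << ". If this is incorrect, explicitly set the dtype for the input");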
core/compiler.cpp (outdated)
// If we can calculate the type from the graph and the type was not defined by the user then use the calculated
// type
LOG_INFO(
    "Since input type is not explicitly defined, infering using first tensor calculation\n Found input "
"Inferred" instead of "Found".
@inocsin can you rebase quickly?
@narendasan rebased to master.
@naren, I think we should remove …
    # info.inputs = [i._to_internal() for i in inputs]
    info.graph_inputs.inputs = [i._to_internal() for i in inputs]
else:
    info.graph_inputs.input_signature = _parse_collection_input(compile_spec["inputs"])
Still cannot pass a Python value (torch.classes.tensorrt._Input()) to info.graph_inputs.input_signature (a torch::jit::IValue).
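For reference, a hedged sketch of the underlying constraint (hypothetical DemoInput class, not the PR's code): a torch.classes-registered object arrives in C++ as an IValue holding a custom class, which the binding has to unwrap explicitly rather than receive as a Python type:

#include <torch/custom_class.h>
#include <vector>

// Hypothetical stand-in for torch.classes.tensorrt._Input
struct DemoInput : torch::CustomClassHolder {
  std::vector<int64_t> shape;
  explicit DemoInput(std::vector<int64_t> s) : shape(std::move(s)) {}
};

// Registration exposes the class to Python as torch.classes.demo._Input
static auto reg = torch::class_<DemoInput>("demo", "_Input")
    .def(torch::init<std::vector<int64_t>>());

// A setter taking the signature must unwrap the IValue explicitly
void set_input_signature(const c10::IValue& iv) {
  if (iv.isObject()) {
    auto in = iv.toCustomClass<DemoInput>(); // throws if it is not a DemoInput
    // ... read in->shape here ...
  }
}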
@narendasan do you have any suggestions?
We need …
…sorrt::Input compatible with IValue. Support simple case of tuple input model. Add unit test. Signed-off-by: inocsin <vcheungyi@163.com>
… elements. Using two level vector to store ir::Input Signed-off-by: inocsin <vcheungyi@163.com>
…sionInfo.collection_input_spec_map Signed-off-by: inocsin <vcheungyi@163.com>
…ually Signed-off-by: inocsin <vcheungyi@163.com>
Now can keep …
…and all the nodes can be converted Signed-off-by: inocsin <vcheungyi@163.com>
# raise KeyError("Input specs should be either torch_tensorrt.Input or torch.Tensor, found types: {}".format(
#     [type(i) for i in compile_spec["inputs"]]))

if isinstance(compile_spec["inputs"], list) and all([isinstance(i, torch.Tensor) or isinstance(i, Input) for i in compile_spec["inputs"]]):
Can these be reduced into a common code path?
Fixes #469
Hi @inocsin! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations; afterwards, the pull request will be tagged. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Should fix #798
When this is done, it will probably make many (if not all) torchvision models much closer to automatic conversion to trt:
Closing in favor of #1201
After I merged these changes, importing torch_tensorrt raises errors. Can you help me? Thanks!
TODO list and current progress
- Support prim::ListConstruct without falling it back (currently we have to fall back those two operators to support list input and output); see the sketch after the checklist below
- Support aten::__getitem__ without falling it back

Description
Fixes #629 #428

Type of change

Checklist:
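For context, a self-contained, hypothetical sketch (not taken from the PR) of the kind of list-input module the TODO items describe: indexing the input list lowers to aten::__getitem__, and building the returned list emits prim::ListConstruct — the two ops that currently force Torch fallback.

#include <torch/script.h>
#include <iostream>

int main() {
  // Compile a tiny TorchScript function with a list input and a list output
  auto mod = torch::jit::compile(R"JIT(
    def list_io(xs: List[Tensor]) -> List[Tensor]:
        return [xs[0] + xs[1], xs[0] - xs[1]]
  )JIT");
  auto t = torch::ones({2, 2});
  // xs[0]/xs[1] become aten::__getitem__; the returned list is a prim::ListConstruct
  auto out = mod->run_method("list_io", c10::List<at::Tensor>({t, t}));
  std::cout << out << std::endl;
  return 0;
}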