✨[Feature] Remove requirement for require_full_compilation=False when using input_signature
#1602
Labels: feature request
Feature Context
Models which are fully supported in TRT, except for their input type being a collection, should be able to be fully compiled in Torch-TRT. Considering that Torch-executed list packing and list unpacking code is already being inserted (by necessity) even when models are fully supported, there should not be a need to disable full compilation when providing complex input types. Additionally, operators such as prim::ListUnpack should not be added to torch_executed_ops automatically when input_signature is used, as they currently are, since evaluators for them exist.
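For concreteness, a minimal sketch of the current workflow (the module, shapes, and dtypes below are illustrative assumptions, not taken from the issue):

```python
import torch
import torch_tensorrt
from typing import Tuple

class TupleInputModel(torch.nn.Module):
    # Every op in the body is TRT-supported; only the tuple input is a collection.
    def forward(self, xs: Tuple[torch.Tensor, torch.Tensor]):
        a, b = xs
        return a + b

model = torch.jit.script(TupleInputModel()).eval().cuda()

# Today, providing input_signature forces partial compilation:
# require_full_compilation must stay False and collection ops run in Torch.
trt_model = torch_tensorrt.ts.compile(
    model,
    input_signature=(
        (
            torch_tensorrt.Input(shape=[2, 3], dtype=torch.float32),
            torch_tensorrt.Input(shape=[2, 3], dtype=torch.float32),
        ),
    ),
    require_full_compilation=False,  # currently required when input_signature is set
)
```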
Desired Solution
The preferred solution is to remove the requirement for require_full_compilation=False when using input_signature, and to remove the requirement that collection-based operators be executed in fallback: TensorRT/py/torch_tensorrt/ts/_compile_spec.py, lines 259 to 300 at commit 835abf0.
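For orientation, the restriction in the referenced lines amounts to roughly the following (a paraphrased sketch; the dictionary keys, names, and error message are assumptions, not verbatim from _compile_spec.py):

```python
# Paraphrased sketch of the behavior this issue asks to remove (not verbatim).
if compile_spec.get("input_signature") is not None:
    # 1. Collection inputs currently force partial compilation.
    if compile_spec.get("require_full_compilation", False):
        raise ValueError(
            "input_signature currently requires require_full_compilation=False"
        )

    # 2. Collection packing/unpacking ops are unconditionally added to the
    #    fallback list, even though evaluators exist for ops like prim::ListUnpack.
    compile_spec["torch_executed_ops"] += [
        "aten::__getitem__",
        "prim::ListConstruct",
        "prim::ListUnpack",
        "prim::TupleConstruct",
        "prim::TupleUnpack",
    ]
```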
This would require modification of the C++ core code as well, to ensure that relaxing this requirement will not cause further issues with the existing compilation phases.
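With the requirement relaxed, the intended end state would be a call along these lines (reusing the hypothetical model from the sketch above):

```python
# Desired end state: collection inputs no longer force partial compilation.
trt_model = torch_tensorrt.ts.compile(
    model,
    input_signature=(
        (
            torch_tensorrt.Input(shape=[2, 3], dtype=torch.float32),
            torch_tensorrt.Input(shape=[2, 3], dtype=torch.float32),
        ),
    ),
    require_full_compilation=True,  # rejected today when input_signature is set
)
```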
Additional Context
A proof-of-concept for this feature already exists in PR #1599, which could be used as a template to enable full-compilation functionality for collection inputs as well. This would complete the plan for Collection IO as discussed in #629 (comment).