model speedup problem using pytorch when multi-output passed across model #2756
Thanks for the reply. I tried the new version of TorchModuleGraph and applied the unpack_manually() method before speeding up the model.
Hi~ @LovPe Could you please show a code snippet of the connecting part of the two models? I'll build a similar example and see if we can handle this scenario. Thanks
@zheng-ningxin The test code:
After running the code, you can get a result like this:
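The original test snippet was not preserved in this thread; below is a hypothetical minimal reconstruction (module names, layer sizes, and input shapes are made up) of the scenario described: a submodel with two outputs whose tuple is consumed by downstream convolutions. Tracing it shows the prim::TupleConstruct that TorchModuleGraph stumbles over.

```python
# Hypothetical reconstruction (not the original test code): a submodel with
# two outputs feeding two downstream convs through a tuple.
import torch
import torch.nn as nn

class ModelA(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        y = self.conv(x)
        return y, y * 2  # two outputs: packed into a tuple by the tracer

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.model_a = ModelA()
        self.conv1 = nn.Conv2d(8, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)

    def forward(self, x):
        out1, out2 = self.model_a(x)  # tuple crosses the submodel boundary
        return self.conv1(out1), self.conv2(out2)

traced = torch.jit.trace(Net(), torch.randn(1, 3, 16, 16))
# The tuple between the submodels shows up as prim::TupleConstruct in the IR.
print("prim::TupleConstruct" in str(traced.graph))
```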
Hi~ @LovPe Currently, NNI cannot handle multiple successive pack & unpack pairs. We will support it as soon as possible. Thanks for the feedback~ In addition, to help other users who see this issue understand the network structure more conveniently, I drew the network topology:
@zheng-ningxin Thanks for the quick reply~
@LovPe Sure~ Here is the visualization tool:
Hi @zheng-ningxin Thanks for the quick reply. The change solves this case, but there is still an issue for me 🤣
An assertion is triggered. Maybe the code should look like this (lines 584-588 in _graph_utils.py): 🤔
Thanks for the help. The new implementation is much clearer and it works well for me~
It seems that speeding up a model (nni.compression.pytorch.speedup.ModelSpeedup) with list/tuple pack/unpack is still not supported as of the current master. List/tuple unpack is not supported in
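As a quick way to check whether a given model hits this limitation, one can scan the traced TorchScript graph for the tuple ops. This is a sketch, not part of NNI's API; the helper name and test modules are made up:

```python
# Small helper (not part of NNI) to check whether a model's traced graph
# contains the tuple ops that ModelSpeedup could not handle at the time.
import torch
import torch.nn as nn

TUPLE_OPS = {"prim::TupleConstruct", "prim::TupleUnpack"}

def uses_tuple_ops(model: nn.Module, example_input: torch.Tensor) -> bool:
    traced = torch.jit.trace(model, example_input)
    # Scan the top-level nodes of the traced graph for tuple pack/unpack ops.
    return any(n.kind() in TUPLE_OPS for n in traced.graph.nodes())

class TwoOutputs(nn.Module):
    def forward(self, x):
        return x + 1, x - 1  # the output pair is packed into a tuple

class OneOutput(nn.Module):
    def forward(self, x):
        return x + 1

print(uses_tuple_ops(TwoOutputs(), torch.randn(2)))  # True: tuple output
print(uses_tuple_ops(OneOutput(), torch.randn(2)))   # False
```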
@tigert1998 we only supported
@QuanluZhang @zheng-ningxin - ping to check the latest status of this fix.
@scarlett2018 Same issue as this one; please refer to it for more details. @xuezu29 I have fixed this in the refactored speedup; please give it a try when this PR is merged, or you can clone the corresponding branch and build NNI manually. Please let me know if there are more scenarios to support. Thanks.
Hi, thanks for the amazing work.
I use NNI speedup on a model with 2 submodels, where the result of the first is passed to the second like this:
modelA --> modelB
modelA has 2 outputs that are passed to modelB as inputs, like this:
modelA.output1 --> convOP --> modelB.output1
modelA.output2 --> convOP --> modelB.output2
The problem is that after building the graph using TorchModuleGraph in _graph_utils.py, the inputs of the convOP node cannot be found correctly in modelA, because there is a prim::TupleConstruct between modelA and modelB and TorchModuleGraph cannot traverse this op.
I wonder if there is any solution to this problem. Looking forward to your reply. Thanks a lot!
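For what it's worth, the information needed to "go over" this op is present in the TorchScript IR: the inputs of a prim::TupleConstruct node are exactly the values produced on the modelA side. The sketch below (not NNI's actual implementation; the Pair module is made up) shows how a graph walker could follow the construct node back to the real producers:

```python
# Sketch only (not NNI's implementation): resolve a tuple back to its
# element producers by following the prim::TupleConstruct node's inputs.
import torch
import torch.nn as nn

class Pair(nn.Module):
    def forward(self, x):
        # Traced as aten::add / aten::mul feeding a prim::TupleConstruct.
        return x + 1, x * 2

traced = torch.jit.trace(Pair(), torch.randn(2))
construct = next(n for n in traced.graph.nodes()
                 if n.kind() == "prim::TupleConstruct")
# Each input of the TupleConstruct node is produced by a real op node.
producers = [inp.node().kind() for inp in construct.inputs()]
print(producers)
```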