
[Frontend][TFLite] Fix fully_connected converter when batch size is not 1 #6038

Merged
FrozenGene merged 2 commits into apache:master on Jul 14, 2020

Conversation

trevor-m (Contributor)

The fully_connected converter used the shapes from the TFLite model to reshape the data tensor. However, the TFLite model shapes do not reflect those provided by the data_shape parameter in from_tflite(). The shape returned by input_tensor.tensor.ShapeAsNumpy() will give a batch size of 1, because the TFLite model knows nothing about the data_shape dict the user provided to the Relay importer.
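For reference, a minimal usage sketch of the importer call in question. The model path and the "input" tensor name below are hypothetical placeholders; the shape dict is the from_tflite() parameter that carries the real batch size:

  # Hedged sketch: load a TFLite model and import it into Relay with a
  # user-provided shape dict. "inception_v4.tflite" and the "input" tensor
  # name are hypothetical placeholders, not taken from this PR.
  import tflite
  from tvm import relay

  with open("inception_v4.tflite", "rb") as f:
      model = tflite.Model.GetRootAsModel(f.read(), 0)

  # Batch size 4 here; the shapes recorded inside the .tflite file still say 1.
  mod, params = relay.frontend.from_tflite(
      model, shape_dict={"input": (4, 299, 299, 3)})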

For this particular op, the reshape can always be set to (-1, n_units) without needing to calculate a batch size.
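A minimal sketch of the idea (not the converter's exact code; the function and variable names here are illustrative), showing that newshape=(-1, n_units) lets type inference recover the true batch size:

  # Illustrative sketch, assuming the TVM Relay Python API; the converter's
  # real identifiers differ.
  import tvm
  from tvm import relay

  def flatten_for_fully_connected(data, n_units):
      # -1 infers the batch dimension from the actual input shape, instead
      # of trusting the always-1 batch recorded in the TFLite model.
      return relay.reshape(data, newshape=(-1, n_units))

  data = relay.var("data", shape=(4, 1, 1, 1536), dtype="float32")
  func = relay.Function([data], flatten_for_fully_connected(data, 1536))
  mod = relay.transform.InferType()(tvm.IRModule.from_expr(func))
  print(mod)  # reshape yields Tensor[(4, 1536), float32], preserving batch 4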

For inceptionv4, without this PR we would incorrectly compute a batch size of 1, adding this reshape from %515 (4, 1, 1, 1536) to %516 (1, 1536). This ultimately causes the model output to become (1, 1001) instead of (4, 1001):

  ..
  %515 = nn.avg_pool2d(%514, pool_size=[8, 8], padding=[0, 0, 0, 0], layout="NHWC") /* ty=Tensor[(4, 1, 1, 1536), float32] */;
  %516 = reshape(%515, meta[relay.Constant][0] /* ty=Tensor[(2), int32] */, newshape=[1, 1536]) /* ty=Tensor[(1, 1536), float32] */;
  %517 = nn.dense(%516, %v_param_299, units=None) /* ty=Tensor[(1, 1001), float32] */;
  %518 = nn.bias_add(%517, %v_param_300) /* ty=Tensor[(1, 1001), float32] */;
  nn.softmax(%518, axis=1) /* ty=Tensor[(1, 1001), float32] */
}

By the way, should the reshape op have some validation to prevent reshapes where the number of elements doesn't match? (4, 1, 1, 1536) -> (1, 1536) shouldn't be valid.

@FrozenGene (Member)

I think we could add a check in the bool ReshapeRel function.
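A sketch of that element-count check, written in Python for illustration (the actual ReshapeRel is a C++ type relation in the Relay op registry, so this is the logic of the suggestion, not its implementation):

  # Hedged illustration of the suggested validation: a reshape is valid only
  # if element counts match, allowing at most one -1 dimension to be inferred.
  import numpy as np

  def reshape_is_valid(old_shape, new_shape):
      old_elems = int(np.prod(old_shape))
      inferred = sum(1 for d in new_shape if d == -1)
      if inferred > 1:
          return False  # at most one dimension may be inferred
      known = int(np.prod([d for d in new_shape if d != -1]))
      if inferred == 1:
          # -1 is valid only if the remaining elements divide evenly.
          return old_elems % known == 0
      return old_elems == known

  assert reshape_is_valid((4, 1, 1, 1536), (-1, 1536))      # infers batch 4
  assert not reshape_is_valid((4, 1, 1, 1536), (1, 1536))   # 6144 != 1536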

@anijain2305 (Contributor) left a comment

LGTM. We should follow up with a check in ReshapeRel as you and Zhao suggested.

FrozenGene merged commit 99c52f3 into apache:master on Jul 14, 2020
@FrozenGene (Member)

@trevor-m @anijain2305 Thank you. This PR is merged now.

trevor-m pushed commits referencing this pull request to trevor-m/tvm (Jul 14 and Jul 15, 2020) and to neo-ai/tvm (Jul 14 and Jul 16, 2020):

[Frontend][TFLite] Fix fully_connected converter when batch size is not 1 (apache#6038)

* Fix fully_connected when batched

* Remove unused variable