
[Frontend][Tensorflow] add batch_dim support for gatherV2 #7951

Merged: 5 commits into apache:main, May 5, 2021

Conversation

zxy844288792 (Contributor)

Encountered a special case with batch_dims=1 in gather() from the centernet_hourglass_512x512_1 model in the TensorFlow Hub model zoo.
Implemented the batch_dims logic according to the TensorFlow implementation:
https://www.tensorflow.org/api_docs/python/tf/gather
https://github.com/tensorflow/tensorflow/blob/5dcfc51118817f27fad5246812d83e5dccdc5f72/tensorflow/core/kernels/gather_op.cc
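
For illustration, here is a small example (shapes assumed for demonstration, not taken from the model) of the tf.gather behavior being ported: with batch_dims=1, the indices are applied per batch element rather than globally.

```python
import tensorflow as tf

# params: one (3, 4) table per batch element; indices: one index set per batch element.
params = tf.reshape(tf.range(2 * 3 * 4), (2, 3, 4))
indices = tf.constant([[0, 2], [1, 1]])

# With batch_dims=1, rows 0 and 2 are gathered from batch 0, rows 1 and 1 from batch 1.
out = tf.gather(params, indices, axis=1, batch_dims=1)
print(out.shape)  # (2, 2, 4)
```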

I have not added test cases for TOPI and Relay since numpy's take does not have a batch_dims argument; I am open to suggestions for that. Test cases for the TensorFlow frontend parser have been added.
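
One possible workaround (a sketch only, not part of this PR; the helper name is illustrative) is a small NumPy reference that emulates batch_dims=1 by indexing each batch element separately:

```python
import numpy as np

def gather_batch_dims1(params, indices, axis=1):
    """Reference for gather(params, indices, axis, batch_dims=1)."""
    # Gather within each batch element; axis is shifted by one because the
    # batch dimension is peeled off before np.take is applied.
    return np.stack([np.take(p, i, axis=axis - 1) for p, i in zip(params, indices)])

params = np.arange(2 * 3 * 4).reshape(2, 3, 4)
indices = np.array([[0, 2], [1, 1]])
print(gather_batch_dims1(params, indices).shape)  # (2, 2, 4), matching tf.gather above
```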


zxy844288792 (Contributor, Author)

@comaniac @icemelon9 @yongwww @kevinthesun

comaniac (Contributor) left a comment

Overall LGTM. Just nits.

Review threads: include/tvm/topi/transform.h (outdated, resolved); src/topi/transform.cc (resolved)
yongwww (Member) left a comment

LGTM

tkonolige (Contributor)

@zxy844288792 I notice you haven't updated the gradient for take to accept batch_dims. Could you either update the gradient or add a check that batch_dims == 0 in the gradient?

zxy844288792 (Contributor, Author)

> @zxy844288792 I notice you haven't updated the gradient for take to accept batch_dims. Could you either update the gradient or add a check that batch_dims == 0 in the gradient?

I just added a check for batch_dims == 0 in the gradient.
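
For reference, an illustrative sketch (names assumed; not the exact diff merged in this PR) of the kind of guard described, rejecting take calls whose gradient is not yet supported:

```python
def check_take_grad_supported(attrs):
    """Raise if the take call uses a batch_dims value the gradient cannot handle yet."""
    batch_dims = int(getattr(attrs, "batch_dims", 0))
    if batch_dims != 0:
        raise NotImplementedError(
            "gradient of take only supports batch_dims == 0, got %d" % batch_dims
        )

class _FakeTakeAttrs:  # stand-in for the real attrs node, for demonstration only
    batch_dims = 0

check_take_grad_supported(_FakeTakeAttrs())  # passes; batch_dims=1 would raise
```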

comaniac merged commit ae31a33 into apache:main on May 5, 2021
comaniac (Contributor) commented May 5, 2021

Thanks @zxy844288792 @tkonolige @yongwww

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
* add batch_dim support
* fix lint
* add check for num of arguments for topi.take
* fix gpu test cases
* add check for batch_dims in take_grad
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request May 11, 2021 (same commit messages as above)