
[NNVM] TF: Add Pack operation #1570

Merged: 3 commits merged into apache:master from nnvm-tf-pack on Aug 18, 2018
Conversation

sergei-mironov (Contributor)

This pull request adds support for the 'Pack' operation when exporting from TensorFlow, and also makes minor fixes to the argmin/argmax test wiring.
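For context, TensorFlow emits a Pack node in the GraphDef when a model calls tf.stack, which joins a list of rank-R tensors into a single rank-(R+1) tensor along a new axis. A minimal TF 1.x illustration (not part of this PR's code):

import tensorflow as tf

a = tf.constant([1, 2])
b = tf.constant([3, 4])
# tf.stack produces a Pack node; the result shape here is (2, 2)
c = tf.stack([a, b], axis=0)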

sergei-mironov (Contributor, Author) commented Aug 8, 2018

Hi. The test run on this PR failed with the message

OpenBLAS : Program will terminate because you tried to start too many threads.

during the MXNet checks, which seems unrelated to this change. Could we re-run it?

sergei-mironov (Contributor, Author)

@srkreddy1238 could you please take a look at this? What might be wrong with the tests?

srkreddy1238 (Contributor)

@grwlf the CI error looks unrelated to the changes.

Did you try retriggering CI?

sergei-mironov (Contributor, Author)

I'm afraid I don't know how to re-trigger CI. Do I have permission to do this?

Now that a merge conflict has appeared, I'll resolve it, and that will re-trigger CI automatically.

srkreddy1238 (Contributor)

Just do a dummy amend and push (change the gerrit id)
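(In practice this presumably means git commit --amend --no-edit followed by a force push of the PR branch; the new commit hash is what re-triggers CI.)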

_test_pack(2, [4,2,4])
_test_pack(0, [2])
_test_pack(0, [0])
_test_pack(0, [])
Contributor

Can you add a test case for a negative axis, since tf.stack supports negative axis values?
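A sketch of what such cases might look like, reusing the _test_pack helper from the snippet above (the exact shapes are illustrative):

_test_pack(-1, [2, 3])   # for two rank-2 inputs, equivalent to axis=2
_test_pack(-3, [2, 3])   # equivalent to axis=0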

Contributor (Author)

Done. It turns out the old implementation produced incorrect output shapes.

sergei-mironov force-pushed the nnvm-tf-pack branch 5 times, most recently from 7bea0d6 to 67e2309, on August 9, 2018
shape0 = attr['_input_shapes'][inputs[0]][0].copy()
shape = shape0.copy()
axis = int(attr["axis"])
if axis < 0:
Contributor

Can you avoid this check? The TOPI implementation of the concatenate op already handles a negative axis.

Contributor (Author)

The key point is that topi.concatenate's algorithm differs from that of tf.stack: the former keeps the number of dimensions unchanged, while the latter adds an extra dimension at the position specified by axis. To bridge this difference, the current implementation performs a reshape, which needs a non-negative axis in order to compute the input shape.

Reshaping also brings another problem: topi::reshape treats a zero dimension specially, while TF does not. Hence the check at line 345.

I suppose the correct thing to do may be to implement topi::stack as a separate operation and not use reshape here at all. What do you think?
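To illustrate the semantic difference in NumPy terms (an analogy only, not the converter code itself):

import numpy as np

a = np.zeros((2, 3))
b = np.zeros((2, 3))

# stack inserts a new axis, like tf.stack / the Pack op:
print(np.stack([a, b], axis=0).shape)        # (2, 2, 3)

# concatenate keeps the rank, like topi.concatenate:
print(np.concatenate([a, b], axis=0).shape)  # (4, 3)

# Emulating stack via reshape-then-concatenate, as the converter does,
# requires building an intermediate shape, hence the non-negative axis:
ra = a.reshape((1, 2, 3))
rb = b.reshape((1, 2, 3))
print(np.concatenate([ra, rb], axis=0).shape)  # (2, 2, 3)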

Contributor

I advise not to rely on _input_shapes for transforms, since it may contain empty dimensions or -1 values, which could fail later.
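An illustration of the hazard, with hypothetical values (the comment above notes that _input_shapes may contain -1 for unknown dimensions):

# Suppose the importer recorded an unknown batch dimension as -1:
shape0 = [-1, 224, 224, 3]
# List surgery to insert the packed axis propagates the -1:
new_shape = [1] + shape0   # [1, -1, 224, 224, 3]
# A later reshape would treat -1 as "infer this dimension",
# which is not what the original graph meant.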

Contributor

I think it is better to implement an operator in terms of existing operators, unless there is a specific requirement not to.

Contributor

@grwlf you may try using expand_dims instead of inserting '1' into _input_shapes.

Contributor (Author)

Thank you, I'll try to use expand_dims.
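For reference, a minimal sketch of an expand_dims-based converter, assuming the NNVM TensorFlow frontend's usual _impl pattern and its _sym alias for nnvm.symbol (not necessarily the exact merged code):

import nnvm.symbol as _sym

def _pack():
    def _impl(inputs, attr, params):
        axis = int(attr["axis"])
        # Give each input the new axis first, then concatenate along it;
        # both ops take the axis as an attribute, so no manual shape
        # arithmetic on _input_shapes is needed.
        expanded = [_sym.expand_dims(i, axis=axis, num_newaxis=1)
                    for i in inputs]
        return _sym.concatenate(*expanded, axis=axis)
    return _impl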

sergei-mironov force-pushed the nnvm-tf-pack branch 2 times, most recently from db2705b to 5f84b07, on August 10, 2018
sergei-mironov force-pushed the nnvm-tf-pack branch 3 times, most recently from eeb2c5e to bd59f15, on August 15, 2018
sergei-mironov (Contributor, Author)

@srkreddy1238 @PariksheetPinjari909 it looks like expand_dims fits fine. Could you please review once again?

b = np.arange(np.prod(shape), dtype=np.float32).reshape(shape)

with tf.Graph().as_default():
    tf_a = constant_op.constant(a, shape=shape, dtype='float32')
Contributor

Ref. #1546: I suggest using only placeholders instead of constants.
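A placeholder-based variant of the snippet above might look like this (TF 1.x API; the name is illustrative):

with tf.Graph().as_default():
    # A placeholder keeps the input out of the frozen graph, so the
    # frontend exercises a real graph input rather than a constant
    # that could be folded away.
    tf_a = tf.placeholder(tf.float32, shape=shape, name='pack_input')
    # ... build tf.stack on tf_a and feed the NumPy array at run time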

Contributor (Author)

Done

sergei-mironov (Contributor, Author)

@srkreddy1238 could you please check this one?

tqchen merged commit 9b0e499 into apache:master on Aug 18, 2018
tqchen (Member) commented Aug 18, 2018

Thanks @grwlf @srkreddy1238 @PariksheetPinjari909, this is now merged.

FrozenGene pushed a commit to FrozenGene/tvm that referenced this pull request on Dec 27, 2018