[NNVM] TF: Add Pack operation #1570
Conversation
Hi. The test on this PR failed during the MXNet checks with a message that seems unrelated to the subject of this change.
@srkreddy1238 could you please take a look at this? What may be wrong with the tests?
@grwlf the CI error looks unrelated to the changes. Did you try re-triggering CI?
I'm afraid I don't know how to re-trigger CI. Do I have permission to do this? Now that a conflict has appeared, I can resolve it, and that will re-trigger CI automatically.
Just do a dummy amend and push (change the Gerrit id).
```python
_test_pack(2, [4,2,4])
_test_pack(0, [2])
_test_pack(0, [0])
_test_pack(0, [])
```
Can you add a test case for negative axis, as tf.stack supports negative axis?
Done. It turns out that the old implementation produced incorrect output shapes.
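For illustration, the added coverage might look like the existing calls with a negative axis; the exact values aren't shown in this thread, so these arguments are hypothetical:

```python
# Hypothetical negative-axis cases in the style of the existing tests;
# tf.stack with axis=-1 inserts the new dimension at the last position.
_test_pack(-1, [3, 2, 3])
_test_pack(-2, [2, 2, 2])
```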
(force-pushed from 7bea0d6 to 67e2309)
```python
shape0 = attr['_input_shapes'][inputs[0]][0].copy()
shape = shape0.copy()
axis = int(attr["axis"])
if axis < 0:
```
Can you avoid this check? The topi implementation of the concatenate op has a provision to handle negative axis.
The key point is that topi.concatenate's algorithm differs from that of tf.stack: the former keeps the number of dimensions unchanged, while the latter adds an extra dimension at the position specified by axis. To bridge this difference, the current implementation performs a reshape, which needs a positive axis in order to compute the input shapes.
Actually, the reshape brings another problem: topi::reshape treats a zero dimension specially while TF does not, hence the check at line 345.
I suppose the correct thing to do may be to implement topi::stack as a separate operation and avoid reshape here entirely. What do you think?
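To make the difference concrete, here is a minimal NumPy sketch of the two behaviours, with np.concatenate standing in for topi.concatenate and np.stack for tf.stack:

```python
import numpy as np

a = np.zeros((4, 2, 4))
b = np.zeros((4, 2, 4))

# concatenate keeps the rank: two (4, 2, 4) inputs joined along axis 0 -> (8, 2, 4)
print(np.concatenate([a, b], axis=0).shape)  # (8, 2, 4)

# stack inserts a new dimension at `axis`: two (4, 2, 4) inputs -> (2, 4, 2, 4)
print(np.stack([a, b], axis=0).shape)        # (2, 4, 2, 4)
```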
I advise not to rely on _input_shapes for transforms, as there may be empty dimensions or -1 values, which might fail later.
I think it will be better if we can implement an operator using existing operators, unless there is a specific requirement otherwise.
@grwlf you may try using expand_dims instead of inserting '1' into _input_shapes.
Thank you, I'll try to use expand_dims.
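A minimal sketch of that approach, assuming the NNVM frontend conventions of the time (the `_pack`/`_impl` structure and the `_sym` alias for nnvm.symbol are illustrative, not necessarily the exact code that landed):

```python
# Sketch of a Pack converter built from existing ops:
# expand_dims each input at `axis`, then concatenate along that axis.
# `_sym` is assumed to be the nnvm.symbol module used inside the frontend.
def _pack():
    def _impl(inputs, attr, params):
        axis = int(attr["axis"])
        expanded = [_sym.expand_dims(x, axis=axis, num_newaxis=1)
                    for x in inputs]
        return _sym.concatenate(*expanded, axis=axis)
    return _impl
```

This sidesteps the manual shape surgery on _input_shapes: expand_dims inserts the new dimension symbolically, so negative and zero-sized axes are handled by the ops themselves.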
(force-pushed from db2705b to 5f84b07)
(force-pushed from eeb2c5e to bd59f15)
@srkreddy1238 @PariksheetPinjari909 it looks like expand_dims fits fine. Could you please review once again?
(force-pushed from bd59f15 to f0be890)
```python
b = np.arange(np.prod(shape), dtype=np.float32).reshape(shape)

with tf.Graph().as_default():
    tf_a = constant_op.constant(a, shape=shape, dtype='float32')
```
Ref. #1546. I suggest using only placeholders instead of constants.
Done
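For reference, a placeholder-based version of that pattern might look roughly like this under the TF 1.x API used at the time (variable names here are illustrative):

```python
# Build the graph with tf.placeholder and feed concrete values at run time,
# instead of baking them in as constants.
import numpy as np
import tensorflow as tf

shape = [4, 2, 4]
a = np.arange(np.prod(shape), dtype=np.float32).reshape(shape)
b = np.arange(np.prod(shape), dtype=np.float32).reshape(shape)

with tf.Graph().as_default():
    tf_a = tf.placeholder(tf.float32, shape=shape, name="in_a")
    tf_b = tf.placeholder(tf.float32, shape=shape, name="in_b")
    out = tf.stack([tf_a, tf_b], axis=0)
    with tf.Session() as sess:
        result = sess.run(out, feed_dict={tf_a: a, tf_b: b})
        print(result.shape)  # (2, 4, 2, 4)
```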
(force-pushed from f0be890 to ea1866a)
@srkreddy1238 could you please check this one?
Thanks @grwlf @srkreddy1238 @PariksheetPinjari909, this is now merged.
This pull request adds support for the 'Pack' operation when importing graphs from TensorFlow, and also makes minor fixes in the argmin/argmax test wiring.
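A rough end-to-end sketch of what the new converter enables (the nnvm.frontend.from_tensorflow entry point matches the NNVM frontend of this era; treat the details as assumptions):

```python
# Build a TF graph containing a Pack node and import it into NNVM.
import tensorflow as tf
import nnvm.frontend

with tf.Graph().as_default() as graph:
    a = tf.placeholder(tf.float32, shape=[2, 3], name="a")
    b = tf.placeholder(tf.float32, shape=[2, 3], name="b")
    out = tf.stack([a, b], axis=0, name="packed")  # serialized as a 'Pack' op
    graph_def = graph.as_graph_def()

# from_tensorflow returns an NNVM symbol plus a params dict.
sym, params = nnvm.frontend.from_tensorflow(graph_def)
```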