Added nanprod to ivy experimental API #21198
Conversation
# Conflicts:
#	ivy/functional/backends/numpy/experimental/statistical.py
#	ivy/functional/backends/paddle/experimental/statistical.py
#	ivy/functional/backends/tensorflow/experimental/statistical.py
#	ivy/functional/backends/torch/experimental/statistical.py
Hi @BebehCodes, thank you for your contribution! I've left a few comments; could you try to address them? Overall I think this is going in the right direction, we just need to refine some corners. Let me know how this goes. Thanks!
--------
>>> a = ivy.Container(x=ivy.array([[1, 2], [3, ivy.nan]]),\
                      y=ivy.array([[ivy.nan, 1, 2], [1, 2, 3]])
>>> ivy.Container.static_moveaxis(a)
I assume you mean static_nanprod; be careful when copying examples from other functions.
--------
>>> a = ivy.Container(x=ivy.array([[1, 2], [3, ivy.nan]]),\
                      y=ivy.array([[ivy.nan, 1, 2], [1, 2, 3]])
>>> ivy.Container.static_moveaxis(a)
Likewise. Also, this should be a.nanprod(axis, etc...)
The product of array elements over a given axis treating
Not a Number values (NaNs) as ones

Functional Examples
Could you include more examples that have variations of the other arguments (axis, keepdims, etc...) for comprehensiveness?
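As a side note for readers of this thread, the kind of argument variations being requested can be sketched with NumPy's `np.nanprod`, which ivy's `nanprod` mirrors (a minimal illustration using plain NumPy, not ivy code):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, np.nan]])

# Default: product over all elements, NaN treated as 1 -> 1*2*3*1
print(np.nanprod(a))  # 6.0

# Per-axis reduction
print(np.nanprod(a, axis=0))  # [3. 2.]
print(np.nanprod(a, axis=1))  # [2. 3.]

# keepdims preserves the reduced axis with size 1
print(np.nanprod(a, axis=1, keepdims=True))  # [[2.] [3.]]

# dtype controls the accumulator/output dtype
print(np.nanprod(a, dtype=np.float32).dtype)  # float32
```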
):
    input_dtype, x, axis, castable_dtype = dtype_x_axis_castable
    x = x[0]
    # Added clipping because the min_value and max_value arguments
This works, but it would be good if the helper function could be properly used to set the necessary boundaries.
    x = x[0]
    # Added clipping because the min_value and max_value arguments
    # don't seem to clip the returned array.
    # Also, for really big/small numbers some backends return something like Xe300
We shouldn't be avoiding these corner cases; instead, the output should remain uniform across backends. If different backends return different outputs for the same input values, then I suggest you take the majority output as your target and modify the implementations of the backends that don't provide this output so that they do.
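For readers, the overflow behavior the test comment works around is easy to reproduce with plain NumPy: products of very large magnitudes overflow float64 to infinity, which is why the test clips inputs (a minimal illustration, not ivy-specific):

```python
import numpy as np

# float64 max is ~1.8e308, so 1e300 * 1e300 overflows to inf
big = np.array([1e300, 1e300])
print(np.nanprod(big))  # inf

# Clipping inputs to a safe range keeps the product finite,
# which is what the test does manually via np.clip
clipped = np.clip(big, -1e150, 1e150)
print(np.isfinite(np.nanprod(clipped)))  # True
```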
        and on_device == "cpu"
    )
)
# This assumption is made because for some reason when
Likewise here, we don't want to just avoid testing these cases. Again, I suggest you make all backend implementations converge to the majority output dtype.
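One way to keep output dtypes uniform, sketched with NumPy (the actual fix would live in each ivy backend implementation): pass the desired dtype explicitly to the reduction rather than relying on each framework's default promotion rules.

```python
import numpy as np

x = np.array([1, 2, 3], dtype=np.int8)

# Without an explicit dtype, NumPy promotes small ints to the
# platform default integer for the accumulator (e.g. int64)
print(np.nanprod(x).dtype)

# Passing dtype explicitly pins the output dtype, which is one way
# to make results uniform across backends
print(np.nanprod(x, dtype=np.int8).dtype)  # int8
```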
Hey @nassimberrada, sorry for the delayed response, I didn't have access to my workspace. I believe I took care of all your comments. Have a look and tell me what you think. Thank you!
Hey @BebehCodes, thanks for making those changes. Looks okay I think; the only issue left to address is the paddle backend, which seems to fail. You can get more details by going to […]. Also, I noticed you modified the […]
Hey @nassimberrada, thanks for your feedback. I think paddle is fixed now. Regarding […], have a look and tell me what you think!
Hey @BebehCodes, looks mostly good. Regarding the modified numpy function, I'm afraid we can't pull this given numpy doesn't support […]
# Conflicts:
#	ivy_tests/test_ivy/helpers/function_testing.py
Hey @nassimberrada, I've tried handling it by either doing casting when this occurs while testing, etc., but the thing is that the wrong casting happens from […]
Hey, sorry for the delayed feedback. I understand, but are we sure the errors are caused by data type discrepancies? Some of the value mismatches seem to indicate there might be another issue at play, e.g. this one
Does the fix you added to […]?
Hey @nassimberrada, sorry for the confusion, I reverted the changes of […].

The changes I made in the function helpers were in order to handle the error since I reverted […].

Yes, you are right about […].

Generally, I haven't figured out a way to bypass the wrong casting that happens by paddle's […]
Hey, thanks for clarifying. I think the same remark applies to your additions to the helper function; ideally we want to keep these exception-handling cases contained within the function. With that said, have you tried making use of the […]?
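For context, the contributor's later reply mentions adding a "supported dtypes decorator". The general pattern such a decorator implements can be sketched as a self-contained wrapper; this `with_supported_dtypes` is a hypothetical illustration of the idea, not ivy's actual API or signature:

```python
import functools
import numpy as np

def with_supported_dtypes(supported):
    """Hypothetical sketch: reject inputs whose dtype isn't whitelisted."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(x, *args, **kwargs):
            if x.dtype not in supported:
                raise TypeError(f"{fn.__name__} does not support {x.dtype}")
            return fn(x, *args, **kwargs)
        return wrapper
    return decorator

@with_supported_dtypes({np.dtype("float32"), np.dtype("float64")})
def nanprod(x, axis=None):
    # Delegate to NumPy; NaNs are treated as ones
    return np.nanprod(x, axis=axis)

print(nanprod(np.array([1.0, np.nan, 3.0])))  # 3.0
```

Gating dtypes at the function boundary like this keeps the exception handling inside the backend function, rather than leaking it into the test helpers.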
…cks in __cummax due to error from lint
# Conflicts:
#	ivy/functional/backends/paddle/experimental/statistical.py
Hey @nassimberrada. Thanks a lot for your suggestion! I added the supported dtypes decorator and the tests are passing for all backends. Also, I added the strategy for the initial argument.
Yeah, looks good to me now, great work! I'll be merging then, thanks again for contributing!
Close #21196