NDArray.set(NDArray index, Number value) failing with int64 index array on gpu #1773
Comments
@KexinFeng Can you help to take a look?
@demq For your case, have you tried setting with NDIndex?
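For reference, a minimal sketch of setting values through NDIndex in DJL (the manager setup and array shown here are illustrative, not taken from the issue):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.index.NDIndex;
import ai.djl.ndarray.types.Shape;

public class SetWithNDIndexSketch {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            // Illustrative float array; in the real case the data comes from the translator.
            NDArray scores = manager.zeros(new Shape(5));
            // Set one position through an NDIndex instead of an index NDArray.
            scores.set(new NDIndex(2), -10000);
            System.out.println(scores);
        }
    }
}
```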
This is available in version 0.18.0, which was just released.
I have pulled the latest snapshot for 0.18.0, and the example that you created works fine. When I try to implement the same thing in QATranslator::processOutput(TranslatorContext ctx, NDList list) I get
If I use
all works fine. Does NDArray.set(NDIndex ind, Number num) somehow cast num to double?
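One way to check the suspected widening would be to inspect the array's data type before and after the call. This is only a diagnostic sketch under my own assumptions (array shape and values are made up), not a statement of what the engine actually does:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.index.NDIndex;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;

public class DtypeCheckSketch {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray arr = manager.zeros(new Shape(4), DataType.INT64);
            System.out.println("before set: " + arr.getDataType()); // expected INT64
            arr.set(new NDIndex(0), 1);
            // If this prints a floating-point type, the Number was widened somewhere.
            System.out.println("after set:  " + arr.getDataType());
        }
    }
}
```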
It's good to hear that the new feature works for you. About the second question, the discrepancy between the two ways of calling set()
Calling NDArray.set(NDArray index, Number value) with the index being an int64 array on a GPU with the PyTorch engine fails with:
The CUDA PyTorch implementation requires a boolean index. The fix is to create the index as an NDArray of type DataType.BOOLEAN:
boolean[] bad_tokens_mask = new boolean[tokenTypes.size()];
This behavior is not documented, and the type check / translation to a boolean index would best be done in the NDArray.set() method itself.
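For context, a hedged sketch of the failing pattern and the boolean-mask workaround described above (variable names other than bad_tokens_mask are illustrative, and Device.gpu() assumes a CUDA device is available):

```java
import ai.djl.Device;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class BooleanMaskWorkaroundSketch {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager(Device.gpu())) {
            NDArray scores = manager.zeros(new Shape(4));

            // Failing pattern from the issue: an int64 array of positions as the index.
            // NDArray intIndex = manager.create(new long[] {0, 2});
            // scores.set(intIndex, -10000); // reportedly fails on GPU with the PyTorch engine

            // Workaround: build the index as a BOOLEAN NDArray (a mask) instead.
            boolean[] bad_tokens_mask = {true, false, true, false};
            NDArray mask = manager.create(bad_tokens_mask);
            scores.set(mask, -10000);
            System.out.println(scores);
        }
    }
}
```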