Negative Loss on gpu ALS model #367
It's looking like the GPU loss calculation might be buggy (see also #441).
The GPU ALS model would sometimes return incorrect results with the `calculate_training_loss` parameter enabled. This happened when number_of_users * number_of_items was bigger than 2**31, due to an overflow in the loss function calculation. Fix this, and add tests that would have caught the bug.
Closes #367
Closes #441
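For illustration, here is a minimal sketch (not the library's actual CUDA code; the numbers are assumptions based on the report below) of how a product of the user and item counts wraps negative in signed 32-bit arithmetic once it exceeds 2**31 - 1, so any loss normalized by that product can come out negative:

```python
import numpy as np

# Roughly the reporter's matrix shape: ~6500 users x ~1M items.
n_users = np.int32(6_500)
n_items = np.int32(1_000_000)

# 6_500 * 1_000_000 = 6.5e9, well above 2**31 - 1 (~2.147e9),
# so the int32 product wraps around to a negative value.
with np.errstate(over="ignore"):
    wrapped = n_users * n_items          # stays int32 and overflows

print(int(wrapped))                      # negative
print(int(np.int64(n_users) * np.int64(n_items)))  # 6500000000 with 64-bit math
```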
There was a bug with the GPU loss calculation - thanks for reporting, and sorry about the lengthy delay in getting this resolved.
Thanks. Will you release a new pip module version?
@gallir - fix is in v0.7.0
Thank you very much. I had modified your build yml to use your latest version, and it worked better than before: https://github.com/gallir/implicit
I'm getting a negative loss value when running ALS on the GPU (loss = -0.0346), no matter how I vary the parameters. When running the same data and parameters on the CPU, I get a positive loss. I'm confused about why the loss could be negative.
It's a ~6500 x 1M CSR matrix.
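For context, a hedged sketch of the kind of run described above: the matrix shape matches the ~6500 x 1M description (about 6.5e9 cells, above 2**31), but the sparsity, hyperparameters, and exact constructor arguments (e.g. `use_gpu`) are assumptions and vary between implicit versions.

```python
import numpy as np
import scipy.sparse as sparse
import implicit

n_users, n_items = 6_500, 1_000_000      # ~6.5e9 cells, the overflow regime
rng = np.random.default_rng(42)

# Build a sparse random user-item interaction matrix (illustrative data only).
nnz = 500_000
rows = rng.integers(0, n_users, size=nnz)
cols = rng.integers(0, n_items, size=nnz)
vals = rng.integers(1, 10, size=nnz).astype(np.float32)
user_items = sparse.csr_matrix((vals, (rows, cols)), shape=(n_users, n_items))

model = implicit.als.AlternatingLeastSquares(
    factors=64,
    iterations=5,
    calculate_training_loss=True,  # logs the training loss each iteration
    use_gpu=True,                  # runs the GPU implementation when CUDA is available
)
# Note: older implicit versions expected the transposed (items x users) matrix here.
# Before the fix shipped in v0.7.0, the logged loss could come out negative.
model.fit(user_items)
```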