Check what's the reason to use double-precision in topic models #1576
Comments
FWIW, the Word2Vec/Doc2Vec classes use single precision (`float32`).
Started the investigation. Tested the classes that use the topic models and went through the code. Basically, numpy uses `float64` as its default floating-point dtype, so as a result everything just defaults to `float64`.
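As a quick illustration of the numpy behavior described above (a minimal sketch, independent of gensim): arrays created without an explicit dtype come out as `float64`, and mixing a `float32` array with a `float64` one promotes the result back to `float64`.

```python
import numpy as np

# numpy's default floating-point dtype is float64
a = np.zeros(5)
print(a.dtype)  # float64

# a float32 array plus a plain Python scalar stays float32...
b = np.zeros(5, dtype=np.float32)
print((b + 1.0).dtype)  # float32

# ...but mixing with a default (float64) array promotes to float64
print((b + a).dtype)  # float64
```

So any code path that builds an intermediate array without passing `dtype=` silently upgrades the whole computation to double precision.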
Good investigation, thanks @xelez.
@menshikh-iv Are there any plans for refactoring? If yes, I think this should be done as part of that refactoring. Well, I think I can do it.
@xelez It would be very nice if you made a PR and started work on it :+1:
I believe the default should be `float32`.
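One practical argument for a `float32` default is memory: double precision doubles the footprint of every returned vector. A back-of-the-envelope sketch (the corpus/topic sizes here are illustrative, not from the issue):

```python
import numpy as np

# Hypothetical topic-model output: 100,000 documents x 500 topics.
n_docs, n_topics = 100_000, 500

# itemsize is 8 bytes for float64 and 4 bytes for float32.
bytes64 = n_docs * n_topics * np.dtype(np.float64).itemsize
bytes32 = n_docs * n_topics * np.dtype(np.float32).itemsize

print(bytes64 // 2**20, "MiB vs", bytes32 // 2**20, "MiB")  # 381 MiB vs 190 MiB
```

Halving that cost matters for large corpora, while the extra precision of `float64` rarely changes the ranking of topics or neighbors.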
Our topic models (TMs) return vectors with double precision (`float64`). This looks very suspicious, because `float32` is enough for all of them. We need to check the reason for this behavior and pinpoint the concrete method responsible.

The first step: look at this line in the test; after that, collect all TMs that depend on these tests and check where and why `float64` happens.

Result: a detailed description (where and why), and a fix for this behavior after discussion (if needed).
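A minimal way to trace where `float64` sneaks in is to assert the dtype of a model's output at each step. A sketch of such a check (the helper name is illustrative, not gensim's API):

```python
import numpy as np

def inferred_vector_dtype(vec):
    """Return the dtype of a model's output vector, coercing lists to arrays."""
    return np.asarray(vec).dtype

# A plain Python list of floats converts to float64 by default --
# exactly how a TM built from Python lists ends up double-precision:
assert inferred_vector_dtype([0.1, 0.2, 0.3]) == np.float64

# Staying in single precision requires an explicit dtype at creation:
vec = np.asarray([0.1, 0.2, 0.3], dtype=np.float32)
assert inferred_vector_dtype(vec) == np.float32
```

Sprinkling assertions like this through the tests for each TM would localize the first point where the dtype widens.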