gcc-8: _mm_loadu_si64 missing #99
Note: this now also affects llama.cpp (as of commit d40fded), which used to compile with gcc-8.3.0 before.
llama.cpp issue: ggerganov/llama.cpp#1120
I noticed that simply replacing the invocation of _mm_loadu_si64 with _mm_loadl_epi64 in ggml.c:439 allows the code to compile with gcc-8 again, and the quantized models seem to run just as well as before. This appears to be the only reference in the entire code base; perhaps you could add some #ifdef magic to make those of us with old compilers happy?
PR welcome - not sure what #ifdef to use |
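One possible #ifdef is a compiler-version check: gcc gained _mm_loadu_si64 in gcc 9 (GCC bug 78782), so on older gcc a wrapper can fall back to _mm_loadl_epi64, which has the same effect (load 64 unaligned bits into the low half of a __m128i, zeroing the upper half). A minimal sketch; the wrapper name ggml_loadu_si64 is hypothetical, not something from the repository:

```c
#include <emmintrin.h>  /* SSE2: _mm_loadl_epi64, _mm_storeu_si128 */
#include <stdint.h>
#include <assert.h>

/* gcc < 9 lacks _mm_loadu_si64 (GCC bug 78782); clang and newer gcc have it. */
#if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 9
static inline __m128i ggml_loadu_si64(const void *p) {
    /* Same semantics: 64-bit unaligned load into the low lane, upper lane zeroed. */
    return _mm_loadl_epi64((const __m128i *)p);
}
#else
static inline __m128i ggml_loadu_si64(const void *p) {
    return _mm_loadu_si64(p);
}
#endif
```

Call sites would then use ggml_loadu_si64 instead of the raw intrinsic; as noted above, substituting _mm_loadl_epi64 directly avoids the #ifdef entirely.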
The fix was suggested by @sw in the
Great, if no #ifdef necessary, so much the better. |
The library does not compile with gcc < 9 (e.g. on Debian 10) because of the missing _mm_loadu_si64 intrinsic (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78782).
Not sure if this counts as a bug since Debian 10 is pretty old, but if there is an easy fix and no other reason to require gcc-9, it may be worth staying backward-compatible. Otherwise, maybe fail more gracefully (and/or mention gcc-9 in the docs).