llama-cpp: fix cuda support #277709
Conversation
Force-pushed from e74b1b4 to e41c63d
@SomeoneSerge thank you for the detailed feedback; I think I integrated everything you pointed out.
Force-pushed from a8eea68 to 02c283f
Result of `nixpkgs-review pr 277709` run
I've switched a system with cudaSupport to this PR and tested that ollama works.
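For reference, roughly what such a test setup might look like on NixOS, with nixpkgs pointed at a checkout of this PR's branch (a sketch, not the exact configuration used for the test above):

```nix
{ pkgs, ... }:
{
  # Build CUDA-enabled variants for packages that honour the global flag.
  nixpkgs.config = {
    cudaSupport = true;
    allowUnfree = true; # the CUDA toolkit is unfree
  };

  # Installing ollama here to exercise the CUDA-enabled llama-cpp,
  # as described in the comment above.
  environment.systemPackages = [ pkgs.ollama ];
}
```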
Force-pushed from 02c283f to 47fc482
@SomeoneSerge I gave it another round of changes. Let me know.
If all the settings are exclusive, one solution could be to replace the booleans with a string.
I'm not sure exactly what the situation is upstream w.r.t. the interaction of these options, so in a way the bool flags might be justified. They let the end-user try the cursed things out, and we do communicate the supported variants using …
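For illustration, a minimal sketch of the enum-style alternative suggested above, assuming a hypothetical `acceleration` argument; the argument name is illustrative and not the actual derivation interface, and the CMake flag names follow upstream llama.cpp's options as of this PR:

```nix
{ lib, acceleration ? null }:

# Reject anything outside the known backends.
assert lib.assertOneOf "acceleration" acceleration
  [ null "cuda" "rocm" "opencl" "metal" ];

{
  # A single string selects the backend instead of independent
  # cudaSupport/rocmSupport/... booleans.
  cmakeFlags =
    lib.optional (acceleration == "cuda") "-DLLAMA_CUBLAS=ON"
    ++ lib.optional (acceleration == "rocm") "-DLLAMA_HIPBLAS=ON"
    ++ lib.optional (acceleration == "opencl") "-DLLAMA_CLBLAST=ON"
    ++ lib.optional (acceleration == "metal") "-DLLAMA_METAL=ON";
}
```

Whether upstream actually allows combining backends is exactly the open question here, which is why keeping the boolean flags is defensible.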
We should wait for Ofborg, but otherwise I think the PR is ready (within the scope suggested by the title)
Thanks @happysalada!
Result of `nixpkgs-review pr 277709` run
I've tested ollama on a system with cudaSupport with this PR.
Description of changes
This fixes CUDA support. It is an attempt at addressing #272569.
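As a usage sketch (assuming the derivation keeps exposing the `cudaSupport` flag discussed above), the fixed package could be consumed per package like this, or globally by setting `cudaSupport = true` in the nixpkgs config:

```nix
let
  pkgs = import <nixpkgs> {
    config.allowUnfree = true; # required for the CUDA toolkit
  };
in
# Build llama-cpp with the CUDA backend enabled.
pkgs.llama-cpp.override { cudaSupport = true; }
```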
Things done
- Is sandboxing enabled in `nix.conf`? (See Nix manual)
  - `sandbox = relaxed`
  - `sandbox = true`
- Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed; also see nixpkgs-review usage.
- Tested basic functionality of all binary files (usually in `./result/bin/`)

Add a 👍 reaction to pull requests you find important.