CUDA Forward Compatibility on non supported HW #234
@Pipboyguy, a couple of things to test: what version of CUDA do you have installed on your base OS? Also, this may sound stupid, but most bugs with CUDA often are: have you tried rebooting since the most recent CUDA installation?
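To answer the question above, the driver and toolkit versions are usually read off the banner line that `nvidia-smi` prints. As a minimal sketch, here is a hypothetical parser for that banner line (the sample string below is illustrative, not output from the reporter's machine):

```python
import re

def parse_nvidia_smi_header(line: str):
    """Extract the driver and CUDA versions from nvidia-smi's banner line."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", line)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", line)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)

# Example banner line in the format nvidia-smi prints:
banner = "| NVIDIA-SMI 535.104.05   Driver Version: 535.104.05   CUDA Version: 12.2 |"
print(parse_nvidia_smi_header(banner))  # ('535.104.05', '12.2')
```

Note that the "CUDA Version" reported here is the maximum toolkit version the driver supports, which need not match the toolkit actually installed (`nvcc --version` reports the latter).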
Here's some info about my host system:
Please see PR #235; it eliminates the issue.

This only eliminates it for me, so it would be nice to get more testers.
Exactly which NVidia GPU are you having issues with? CUDA 12.1 seems to support my ancient GTX 1080 Ti (Pascal architecture). I have a 980 Ti (Maxwell) somewhere, but I'd have to plug it in and hope it still works.
Running an RTX 4080. Outdated drivers, perhaps?
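Whether a driver is "outdated" for a given toolkit can be checked against the minimum Linux driver versions NVIDIA publishes in the CUDA release notes. The numbers below are my reading of those notes; treat them as illustrative and double-check for your exact release:

```python
# Minimum Linux driver version per CUDA toolkit (from NVIDIA's CUDA
# release notes; illustrative values, verify for your exact release).
MIN_DRIVER = {
    "11.7": (515, 43, 4),
    "11.8": (520, 61, 5),
    "12.0": (525, 60, 13),
    "12.1": (530, 30, 2),
}

def driver_supports(cuda: str, driver: str) -> bool:
    """Return True if `driver` meets the minimum for toolkit `cuda`."""
    return tuple(int(p) for p in driver.split(".")) >= MIN_DRIVER[cuda]

print(driver_supports("12.1", "535.104.05"))  # True
print(driver_supports("12.1", "515.43.04"))   # False: too old for 12.1
```

A driver that predates the toolkit's minimum is the most common cause of the forward-compatibility error this issue describes.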
It is working with my 3090 Ti. My Ubuntu Docker host:

My Pop!_OS host:
Should be addressed in #258.

Shall we close this issue?
Just got it on a 4090 with the latest master. Commit:
Are you running in a virtualized environment and trying to access your NVidia GPU? If so, try updating your NVIDIA driver in your VM or Docker instance.
What worked for me was upgrading the NVIDIA driver on the host; after that, CUDA 12.1 should work. Also try CUDA 11.7 if upgrading the NVIDIA driver is a pain. Very likely the issue in your case as well, @d0rc.
That makes more sense. I can see how an older driver in the VM works fine with a newer driver on the host, but not vice versa.
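The asymmetry described above can be sketched as a simple version comparison (a hypothetical helper for illustration, not part of any NVIDIA tooling):

```python
def driver_combo_ok(host_driver: str, guest_driver: str) -> bool:
    """A newer driver on the host can serve an older driver in the VM or
    container, but an older host driver cannot serve a newer guest driver."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(host_driver) >= to_tuple(guest_driver)

print(driver_combo_ok("535.104.05", "525.60.13"))  # True: host is newer
print(driver_combo_ok("470.82.01", "535.104.05"))  # False: guest is newer
```

This is why upgrading the host driver (rather than the one inside the container) resolved the error in the comments above.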
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
There's no tagged CUDA image on ghcr, so after building the Dockerfile.cuda image, I run:

docker run --gpus=all --rm -it -p 8000:8000 -v /home/***/models:/models -e MODEL=/models/GPT4-X-Alpasta-30b_q4_0.bin llama_cpp_server_cuda
Current Behavior