Hard crash in ggml_backend_dev_count
#75
Comments
I figured that the sane behavior here would be to fall back to CPU, but now I'm not so sure that's even possible. We built llama.cpp for Vulkan, and it doesn't seem to be able to start any kind of Vulkan driver. EDIT: yeah... if I hard-code a
It's a bit weird that the

Also weird:
Okay, so the environment where the unit tests pass and the environment where the integration test fails are not very alike after all. Another way of reproducing the issue, without running our code, is to run
Can we check for the availability of Vulkan drivers, maybe?
There is functionality to query for Vulkan-compatible devices without crashing, through the VK_KHR_portability_enumeration extension. That was actually supported by

I think we should document the error and try to apply an upstream fix.
...this was also an issue before using the ggml stuff to enumerate GPUs, back when we were using wgpu for the same task.
Our code panics because a C++ exception is thrown:
This is reproducible in github ci, on this PR: #71
It also happens on my machine, when running the integration tests in the nix sandbox. (The same integration test runs fine outside of the nix sandbox, using the exact same build files).
Since we can't catch C++ exceptions in Rust, we need to come up with some sort of guard clause. Either that, or catch it on the other side of the FFI barrier.