forked from ggml-org/llama.cpp
Throws an error when using Qwen2-VL on Mac #1273
Here's the error:

```
ggml_metal_encode_node: error: unsupported op 'ROPE'
ggml/src/ggml-metal/ggml-metal.m:1263: unsupported op
```
I don't think wholesale disabling the GPU is a good idea. I'll wait and see whether upstream develops a better solution; otherwise I'll run CLIP on CPU only for this specific model on macOS.
Isn't that what the fix did for now: only run CLIP on CPU on Mac?
Fixed, please update.
Thanks, it works now.
There was a bug that prevented Qwen2-VL from running on Mac. The fix was merged into llama.cpp yesterday.
ggml-org#10896
Thanks!