Can it support macOS? M2 chip. #1921
Comments
Currently vLLM doesn't support macOS or Windows natively. It is designed for NVIDIA GPUs, with AMD GPU support in progress. If you want to use an M2 chip right now, I would recommend llama.cpp instead. But we are definitely open to ideas and contributions here! AFAIK, PyTorch's macOS support is pretty far behind?
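As a sketch of the workaround suggested above, a downstream script could detect Apple Silicon at runtime and fall back to a llama.cpp backend on such hosts. The helper name `is_apple_silicon` is illustrative and not part of vLLM or llama.cpp; it only uses the standard-library `platform` module:

```python
import platform
from typing import Optional


def is_apple_silicon(system: Optional[str] = None, machine: Optional[str] = None) -> bool:
    """Return True when running on an Apple Silicon Mac (e.g. an M2).

    `system` and `machine` default to the current host's values; they are
    parameters only so the check is easy to exercise in tests.
    """
    system = system if system is not None else platform.system()
    machine = machine if machine is not None else platform.machine()
    # macOS reports "Darwin"; Apple Silicon reports "arm64".
    return system == "Darwin" and machine == "arm64"


if __name__ == "__main__":
    if is_apple_silicon():
        print("Apple Silicon host: consider a llama.cpp backend")
    else:
        print("Non-Apple-Silicon host: vLLM's CUDA path may apply")
```

This keeps the backend choice explicit rather than letting an unsupported platform fail deep inside GPU initialization.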
+1
Closing in favour of #2081