
Can it support macOS? M2 chip. #1921

Closed
znsoftm opened this issue Dec 4, 2023 · 4 comments

Comments


znsoftm commented Dec 4, 2023

Asas


simon-mo (Collaborator) commented Dec 4, 2023

vLLM doesn't currently support macOS or Windows natively. It is designed for NVIDIA GPUs, with AMD GPU support in progress.

If you want to use the M2 chip right now, I would recommend llama.cpp for now (see the sketch below).

But we are definitely open to ideas and contributions here! AFAIK, PyTorch's macOS support is still pretty far behind?
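As a rough sketch of that llama.cpp route on an M2, using the llama-cpp-python bindings (the model path below is a placeholder; any local GGUF-format model works):

```python
# Minimal sketch: running a GGUF model via llama.cpp's Python bindings
# on Apple Silicon. Install with `pip install llama-cpp-python`; on an
# M-series Mac the build enables Metal acceleration by default.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on M2)
    n_ctx=2048,       # context window size
)

output = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(output["choices"][0]["text"])
```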


pathorn commented Dec 22, 2023

Dupe of #1397

I just opened PR #2244 - feel free to test and give feedback, but this issue should probably be closed as a duplicate.

@bluenevus

+1


hmellor (Collaborator) commented May 31, 2024

Closing in favour of #2081

hmellor closed this as not planned on May 31, 2024