[Misc]: More patch versions #9200
Comments
We aim to have a release every two weeks, but it is sometimes blocked by major features or model releases. We believe our nightly versions should be fairly stable; would that suffice?
You can find instructions on grabbing the nightly releases in a Note on this page: https://docs.vllm.ai/en/stable/getting_started/installation.html
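For reference, a minimal sketch of what installing a nightly build typically looks like, assuming pip's `--pre` and `--extra-index-url` flags and a nightly wheel index such as `https://wheels.vllm.ai/nightly` (the index URL is an assumption here; follow the linked installation page for the authoritative command and supported Python/CUDA versions):

```bash
# Sketch: install the latest nightly vLLM wheel into a fresh virtual environment.
# The nightly index URL below is an assumption; check the installation docs
# linked above for the exact command.
python -m venv vllm-nightly && source vllm-nightly/bin/activate

# --pre allows pre-release (dev) versions; --extra-index-url adds the nightly
# wheel index on top of PyPI.
pip install --pre vllm --extra-index-url https://wheels.vllm.ai/nightly

# Verify which build was installed.
python -c "import vllm; print(vllm.__version__)"
```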
Oh, I just noticed the release of the nightly versions, thanks for that! I suppose there aren't any nightly Docker images available, right?
Right, I haven't seen any.
@ycool Thank you so much! Really appreciate it!
Anything you want to discuss about vllm.
Hi team,
It seems that vllm version 0.6.2 is having some issues when used with models like Qwen2-VL, Pixtral, and others (#9068, #9091...).
Most of the bugs have already been fixed in the current dev branch, but they haven’t been included in a new version yet, which makes it a bit tricky to use in a production environment.
I'm really grateful for all the hard work you're putting into this project! It would be awesome if there could be more frequent version releases, especially patch versions that address bug fixes.
Thanks a lot!
Michael