[Bug]: When using the latest 0.6.3, No module named 'vllm._version' appears #9421
Comments
Same issue with mistral-nemo-instruct-2407 and llama31-8b-instruct.
I will release a new version once this is fixed... @dtrifiro I think this is due to the fact that I hardcoded the version in `get_vllm_version()`:

```python
def get_vllm_version() -> str:
    return "0.6.3."  # i did this, which skipped _version.py writing.
    version = get_version(
        write_to="vllm/_version.py",  # TODO: move this to pyproject.toml
    )
```
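For context, setuptools_scm's `write_to` normally generates `vllm/_version.py` at build time, so returning a literal before `get_version()` means that file never exists in the wheel. A rough sketch of what the generated file usually contains (the exact template depends on the setuptools_scm version, so treat this as illustrative):

```python
# vllm/_version.py -- normally auto-generated by setuptools_scm via write_to.
# Illustrative sketch; the exact contents vary with the setuptools_scm version.
__version__ = version = "0.6.3"
__version_tuple__ = version_tuple = (0, 6, 3)
```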
I don't know if the following is the same problem.
same issue with glm-4v-9b
+1
same issue with qwen2-vl-instruct
+1
I will make a patch release once #9375 is merged.
I worked around this issue with the following command: `pip install "vllm>=0.4.3,<0.6.4"`
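As a quick sanity check (hypothetical snippet, assuming a standard pip install of vllm), you can see whether the generated version module exists on a given install before and after repinning:

```python
# Hypothetical check: on affected 0.6.3 installs the generated vllm/_version.py
# is missing, so find_spec returns None instead of a ModuleSpec.
import importlib.util

print(importlib.util.find_spec("vllm._version"))
```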
Any update on this?
+1 |
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
Start the service:

```
vllm serve /models/huggingface.co/meta-llama/Llama-3-8b-hf/
```

A warning appears:

```
/usr/local/lib/python3.10/dist-packages/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm._version'
  from vllm.version import __version__ as VLLM_VERSION
```
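The warning comes from the version lookup falling back when `vllm._version` cannot be imported. A minimal sketch of that kind of fallback (assuming a `vllm/version.py` along these lines; not the verbatim upstream file):

```python
# Sketch of a package version module that falls back gracefully when the
# setuptools_scm-generated _version.py is absent, emitting the warning above.
import warnings

try:
    from ._version import __version__, __version_tuple__
except Exception as e:  # e.g. ModuleNotFoundError: No module named 'vllm._version'
    warnings.warn(f"Failed to read commit hash:\n{e}", RuntimeWarning, stacklevel=2)
    __version__ = "dev"
    __version_tuple__ = (0, 0, "dev")
```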