[Model] Initialize deepseek-vl support #5817
Conversation
Contributed by enflame-tech
Thanks for the contribution! I have a few initial comments.
Apart from that, can you add a test case (similar to test_llava.py) to test the correctness of the model in CI?
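vLLM's model correctness tests generally run the same prompts through a HuggingFace reference implementation and through vLLM, then compare the generations. Below is a minimal sketch of that comparison step only; the helper name and the toy data are illustrative, not the actual test_llava.py code:

```python
# Sketch of the comparison pattern used in vLLM's model correctness tests:
# generate with both backends on the same prompts, then assert the outputs
# match per prompt. Helper name and data here are hypothetical.

def check_outputs_equal(hf_outputs, vllm_outputs):
    """Assert that both backends produced identical generations per prompt."""
    assert len(hf_outputs) == len(vllm_outputs), "prompt count mismatch"
    for i, (hf_text, vllm_text) in enumerate(zip(hf_outputs, vllm_outputs)):
        assert hf_text == vllm_text, (
            f"prompt {i}: HF produced {hf_text!r}, vLLM produced {vllm_text!r}"
        )

if __name__ == "__main__":
    # Toy strings standing in for real model generations.
    hf = ["a photo of a cat", "two dogs playing"]
    vllm = ["a photo of a cat", "two dogs playing"]
    check_outputs_equal(hf, vllm)
    print("outputs match")
```

In the real tests the two lists come from running the HF model and a vLLM engine on identical prompts and sampling parameters.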
Now supports deepseek-ai/deepseek-vl-7b-chat and deepseek-ai/deepseek-vl-1.3b-chat.
This model depends on timm>=0.9.16, which in turn depends on torch; that conflicts with the dependencies of other third-party components and causes the pipeline to fail. Running this model therefore requires an additional installation step, and I am not sure whether that is appropriate. It also depends on many modules of timm, which are difficult to remove.
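The conflict described above is a version-constraint problem: the model needs timm>=0.9.16, while other pinned dependencies pull in an incompatible torch. As a simplified illustration of the minimum-version check at the heart of a `>=` specifier (real pip resolution follows PEP 440 via the `packaging` library; this toy comparison only handles plain dotted numeric versions):

```python
# Simplified illustration of the "timm>=0.9.16" minimum-version check that
# the dependency conflict above revolves around. Real resolvers use PEP 440
# semantics; this toy version only handles dotted numeric versions.

def parse_version(v: str) -> tuple:
    """Turn '0.9.16' into (0, 9, 16) so tuples compare lexicographically."""
    return tuple(int(part) for part in v.split("."))

def satisfies_minimum(installed: str, minimum: str) -> bool:
    """True if the installed version meets a '>=' constraint."""
    return parse_version(installed) >= parse_version(minimum)

if __name__ == "__main__":
    assert satisfies_minimum("0.9.16", "0.9.16")
    assert satisfies_minimum("1.0.0", "0.9.16")
    assert not satisfies_minimum("0.9.12", "0.9.16")
    print("version checks pass")
```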
Can you implement the individual timm modules inside vLLM? (Where possible, you should use vLLM-specific layers to improve performance anyway.)
OK, I will try to do this. I think it will take some time.
You can make use of our implementation of
This test case (tests/models/test_deepseek_vl.py) depends on the project https://github.com/deepseek-ai/DeepSeek-VL, and its pip installation seems to fail when building the Docker image. I think it may be acceptable not to add this test case; examples/deepseek_vl_example.py runs successfully.
In that case it won't work for users of vLLM either, since they can't install it (so you should still keep the tests). Can you figure out which dependency is causing the issue?
You can manually register the model to HuggingFace inside the test case. |
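"Manually registering the model" refers to the Auto-class registration mechanism in HuggingFace transformers (`AutoConfig.register` and the `AutoModel*.register` methods), which lets a test load a model class that transformers does not ship. Below is a dependency-free toy sketch of that registry pattern; the class names are illustrative stand-ins, not the real deepseek-vl classes:

```python
# Toy sketch of the Auto-class registration pattern used by HuggingFace
# transformers (AutoConfig.register / AutoModel.register). The dict-based
# registry and the stub model class are illustrative only.

class AutoModelRegistry:
    _registry: dict = {}

    @classmethod
    def register(cls, model_type: str, model_cls: type) -> None:
        # transformers likewise rejects duplicate registrations.
        if model_type in cls._registry:
            raise ValueError(f"{model_type!r} is already registered")
        cls._registry[model_type] = model_cls

    @classmethod
    def from_model_type(cls, model_type: str):
        # Look up and instantiate the registered class.
        return cls._registry[model_type]()

class DeepseekVLStub:
    """Stand-in for the model class a test would register."""
    def generate(self, prompt: str) -> str:
        return f"generated for: {prompt}"

if __name__ == "__main__":
    AutoModelRegistry.register("deepseek_vl", DeepseekVLStub)
    model = AutoModelRegistry.from_model_type("deepseek_vl")
    print(model.generate("describe the image"))
```

Inside the real test you would call the transformers registration methods in a fixture or setup step before loading the checkpoint.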
OK, I'll try it.
The test models/test_deepseek_vl.py failed, but no exception stack was thrown, and I don't know what happened; the program seems to have been terminated. Have you encountered similar problems?
The stack trace is shown near the end of the CI logs: https://buildkite.com/vllm/ci-aws/builds/4404#0190956c-f526-40a4-b2af-232d40ffbd0c |
buildkite/fastcheck/pr/tensorizer-metrics-tracing-test — Failed (exit status 1) |
It's unrelated to this PR. |
…ech/vllm into deepseek-vl-7b-chat
Tested on NVIDIA L40S
FIX #3356
FIX #4982