Dockerfile.ubi: use cuda-base as base for vllm-openai target
This adds the CUDA runtime in order to fix the missing libcudart.so.12
dependency of the vLLM libraries.
dtrifiro committed Jun 18, 2024
1 parent c127b61 commit 712ffff
Showing 1 changed file with 1 addition and 1 deletion.
Dockerfile.ubi (1 addition, 1 deletion)
@@ -169,7 +169,7 @@ RUN --mount=type=cache,target=/root/.cache/ccache \
# We used base cuda image because pytorch installs its own cuda libraries.
# However pynccl depends on cuda libraries so we had to switch to the runtime image
# In the future it would be nice to get a container with pytorch and cuda without duplicating cuda
-FROM python-install AS vllm-openai
+FROM cuda-base AS vllm-openai

WORKDIR /workspace

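The change only works if Dockerfile.ubi defines a cuda-base stage that actually ships the CUDA runtime. That stage is not shown in this diff; the sketch below is a hypothetical illustration of what such a stage could look like on a UBI image, assuming CUDA 12.4, the NVIDIA rhel9/x86_64 repository, curl and microdnf being available, and the python-install stage name taken from the removed line. Package names and install paths are assumptions, not the contents of the actual file.

# Hypothetical cuda-base stage (illustration only; the real definition lives
# elsewhere in Dockerfile.ubi). It layers the CUDA runtime, which provides
# libcudart.so.12, on top of the Python install stage so that any stage built
# FROM cuda-base, such as vllm-openai, inherits it.
FROM python-install AS cuda-base

# Assumed CUDA version and repository: the NVIDIA rhel9 repo provides the
# cuda-cudart runtime packages.
RUN curl -Lo /etc/yum.repos.d/cuda-rhel9.repo \
        https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo && \
    microdnf install -y cuda-cudart-12-4 && \
    microdnf clean all

# Assumed install location; adjust to wherever the packages actually place the
# runtime libraries.
ENV LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH

With a base stage along these lines, switching the final stage to FROM cuda-base AS vllm-openai makes libcudart.so.12 available at runtime (which pynccl needs), instead of relying only on the CUDA libraries bundled inside the PyTorch wheel, at the cost of the duplication noted in the in-file comment.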
