[Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
terrytangyuan authored and abmfy committed Jan 24, 2025
1 parent 004d864 commit 80921ad
Showing 1 changed file with 3 additions and 0 deletions: docs/source/deployment/docker.md
@@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
By default, vLLM builds for all GPU types for the widest distribution. If you are only building for the
GPU type of the machine you are running on, you can add the argument `--build-arg torch_cuda_arch_list=""`
so that vLLM detects the current GPU type and builds for it.
If you are using Podman instead of Docker, you might need to disable SELinux labeling by
adding `--security-opt label=disable` to the `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184), as in the example below.
```
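
For example, a build restricted to the local GPU architecture might look like this (a sketch, reusing the target and tag from the command above):

```console
DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai --build-arg torch_cuda_arch_list=""
```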
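
When building with Podman on an SELinux-enabled host, an invocation with labeling disabled might look like this (a sketch; the target and tag are assumed to mirror the Docker command above):

```console
podman build . --target vllm-openai --tag vllm/vllm-openai --security-opt label=disable
```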

## Building for Arm64/aarch64
