dl_checkpoints.sh - restore docker pull for mutable docker tags #375

Merged · 1 commit · Dec 18, 2024
18 changes: 15 additions & 3 deletions runner/dl_checkpoints.sh
Contributor (Author) commented:

`docker image tag` is a temporary solution. The proper fix is to make `var livePipelineToImage` configurable via envs/CLI/ai-models.json.
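The retag workaround leans on shell default-value expansion: `${VAR:-default}` uses the caller's environment value when set and falls back to the tag that ai-worker hardcodes otherwise. A minimal sketch of that pattern (standalone, no docker needed):

```shell
# Override-with-default pattern used in the diff: the image tag can be
# supplied by the caller's environment; otherwise it falls back to the
# tag hardcoded in ai-worker's livePipelineToImage.
AI_RUNNER_STREAMDIFFUSION_IMAGE=${AI_RUNNER_STREAMDIFFUSION_IMAGE:-livepeer/ai-runner:live-app-streamdiffusion}
echo "$AI_RUNNER_STREAMDIFFUSION_IMAGE"
```

Running this with the variable unset prints the default tag; exporting `AI_RUNNER_STREAMDIFFUSION_IMAGE` first makes the override win.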

```diff
@@ -128,8 +128,12 @@ function build_tensorrt_models() {
     # StreamDiffusion (compile a matrix of models and timesteps)
     MODELS="stabilityai/sd-turbo KBlueLeaf/kohaku-v2.1"
     TIMESTEPS="3 4" # This is basically the supported sizes for the t_index_list
+    AI_RUNNER_STREAMDIFFUSION_IMAGE=${AI_RUNNER_STREAMDIFFUSION_IMAGE:-livepeer/ai-runner:live-app-streamdiffusion}
+    docker pull $AI_RUNNER_STREAMDIFFUSION_IMAGE
+    # ai-worker has tags hardcoded in `var livePipelineToImage` so we need to use the same tag in here:
+    docker image tag $AI_RUNNER_STREAMDIFFUSION_IMAGE livepeer/ai-runner:live-app-streamdiffusion
     docker run --rm -v ./models:/models --gpus all -l TensorRT-engines \
-        ${AI_RUNNER_STREAMDIFFUSION_IMAGE:-livepeer/ai-runner:live-app-streamdiffusion} \
+        $AI_RUNNER_STREAMDIFFUSION_IMAGE \
         bash -c "for model in $MODELS; do
             for timestep in $TIMESTEPS; do
                 echo \"Building TensorRT engines for model=\$model timestep=\$timestep...\" && \
@@ -138,8 +142,12 @@ function build_tensorrt_models() {
         done"

     # FasterLivePortrait
+    AI_RUNNER_LIVEPORTRAIT_IMAGE=${AI_RUNNER_LIVEPORTRAIT_IMAGE:-livepeer/ai-runner:live-app-liveportrait}
+    docker pull $AI_RUNNER_LIVEPORTRAIT_IMAGE
+    # ai-worker has tags hardcoded in `var livePipelineToImage` so we need to use the same tag in here:
+    docker image tag $AI_RUNNER_LIVEPORTRAIT_IMAGE livepeer/ai-runner:live-app-liveportrait
     docker run --rm -v ./models:/models --gpus all -l TensorRT-engines \
-        ${AI_RUNNER_LIVEPORTRAIT_IMAGE:-livepeer/ai-runner:live-app-liveportrait} \
+        $AI_RUNNER_LIVEPORTRAIT_IMAGE \
         bash -c "cd /app/app/live/FasterLivePortrait && \
             if [ ! -f '/models/FasterLivePortrait--checkpoints/liveportrait_onnx/stitching_lip.trt' ]; then
                 echo 'Building TensorRT engines for LivePortrait models (regular)...'
@@ -155,8 +163,12 @@ function build_tensorrt_models() {
             fi"

     # ComfyUI (only DepthAnything for now)
+    AI_RUNNER_COMFYUI_IMAGE=${AI_RUNNER_COMFYUI_IMAGE:-livepeer/ai-runner:live-app-comfyui}
+    docker pull $AI_RUNNER_COMFYUI_IMAGE
+    # ai-worker has tags hardcoded in `var livePipelineToImage` so we need to use the same tag in here:
+    docker image tag $AI_RUNNER_COMFYUI_IMAGE livepeer/ai-runner:live-app-comfyui
     docker run --rm -v ./models:/models --gpus all -l TensorRT-engines \
-        ${AI_RUNNER_COMFYUI_IMAGE:-livepeer/ai-runner:live-app-comfyui} \
+        $AI_RUNNER_COMFYUI_IMAGE \
         bash -c "cd /comfyui/models/Depth-Anything-Onnx && \
             python /comfyui/custom_nodes/ComfyUI-Depth-Anything-Tensorrt/export_trt.py && \
             mkdir -p /comfyui/models/tensorrt/depth-anything && \
```
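All three pipeline sections repeat the same pull-then-retag step. A hypothetical helper (not part of the PR) factoring that step out, with a `DRY_RUN` switch so the logic can be exercised without a docker daemon:

```shell
#!/bin/sh
# Hypothetical helper (not in the PR) sketching the pull-then-retag step.
#   $1: mutable upstream image tag, typically overridable via an env var
#   $2: fixed tag hardcoded in ai-worker's `var livePipelineToImage`
# With DRY_RUN=1 the docker commands are printed instead of executed.
pin_image() {
    image=$1
    fixed_tag=$2
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "docker pull $image"
        echo "docker image tag $image $fixed_tag"
    else
        docker pull "$image"
        docker image tag "$image" "$fixed_tag"
    fi
}

# Dry-run usage for the ComfyUI pipeline:
DRY_RUN=1
pin_image "${AI_RUNNER_COMFYUI_IMAGE:-livepeer/ai-runner:live-app-comfyui}" \
          livepeer/ai-runner:live-app-comfyui
```

Factoring the step out this way would also give a single place to drop the retag once `livePipelineToImage` becomes configurable, as the author's comment suggests.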