This project provides a user-agnostic Docker image that lets users run containers with their user and group privileges automatically mapped inside the container at runtime, so the same Docker image can be shared by all users.
The image also includes Miniconda for managing Python environments, and the provided setup maps every conda environment to a permanent storage location. This ensures that conda environments can be used and modified within the container while persisting across container restarts.
Key features:
- Automatic User Privilege Mapping: The container dynamically maps the host user's UID and GID, along with their group memberships, ensuring proper file permissions and access control.
- Persistent Conda Environments: Conda environments are stored in a mounted volume, allowing them to persist even when the container is stopped or recreated.
- Single Image for Multiple Users and Projects: Build the image once and use it across different users and projects, saving disk space and time.
- Pre-installed Basic Packages: Essential `apt` packages and utilities are pre-installed. Additional packages can be added as needed by extending this image.
To build the Docker image, run in this directory:

```bash
docker build -t $(whoami)/cuda-miniconda:latest .
```

This command builds the Docker image using the `Dockerfile` in the current directory and tags it as `<username>/cuda-miniconda:latest`.
Use the `run-docker-conda` script to run the container.
The `run-docker-conda` command behaves exactly like `run-docker`, accepting the same CLI arguments.
It automatically mounts the current directory as `/exp` and reads an optional `.runconfigs` file if one is present.
Additional arguments are described below.
Basic example:

```bash
./run-docker-conda '' '0-3' /bin/bash
```

opens a bash shell in your container running on CPU cores 0-3 with no GPU.
Note: Ensure the `run-docker-conda` script is executable by running

```bash
chmod +x run-docker-conda
```

before using it.
You can also install this script in your local bin folder to be able to run it from any directory. Run:

```bash
cp run-docker-conda ~/.local/bin/run-docker-conda
```

and make sure that `~/.local/bin` is in your `PATH` by adding this line to your `~/.bashrc`:

```bash
export PATH="$HOME/.local/bin${PATH:+:${PATH}}"
```

Now you can just run `run-docker-conda` from any directory!
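After reloading your shell configuration, you can quickly check that the script is found through your `PATH`:

```bash
source ~/.bashrc          # pick up the updated PATH
which run-docker-conda    # should print something like /home/<you>/.local/bin/run-docker-conda
```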
```
Usage: run-docker-conda [OPTIONS] <gpu-list> <cpu-list> <command>
```

- `--image <image_name>`: Specify the Docker image to use (default: `<username>/cuda-miniconda:latest`).
- `--name <container_name>`: Specify the name for the running container. If not provided, the default name set by `run-docker` will be used.
- `--project <project_dir>`: Path to the project directory to mount in `/exp` (default: current directory, as with `run-docker`).
- `--conda-envs <envs_dir>`: Path to the directory where your conda environments are stored (default: `/multiverse/storage/<username>/conda_envs/`). This is used to persist conda environments across runs.
- `--docker-args <args>`: Additional arguments to pass to Docker, as with `run-docker`. These can include port mappings, environment variables, or other Docker runtime options.
- `-h, --help`: Show the help message detailing usage and options.
Additionally, any option specified in a `.runconfigs` file will be loaded directly by `run-docker`.
As with `run-docker`, the positional arguments are:

- `<gpu-list>`: Comma-separated list of GPUs to use. Leave empty (`''`) for no GPU restriction.
- `<cpu-list>`: Comma-separated list of CPU cores to use. Leave empty (`''`) for no CPU restriction.
- `<command>`: Command to run inside the container (default: `bash`). This can be any command, such as running a script or launching a Python shell.
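For example, the options and positional arguments can be combined as follows (the container name, port mapping, and paths below are purely illustrative):

```bash
./run-docker-conda --name my-experiment \
    --conda-envs /multiverse/storage/$(whoami)/conda_envs \
    --docker-args "-p 8888:8888" \
    '0,1' '0-7' /bin/bash
```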
In your container, you can use conda seamlessly to create new environments and update them over time.
Your conda environments are mounted at `/envs` inside your container and permanently stored in storage (default: `/multiverse/storage/<username>/conda_envs/`). This ensures that any environments you create or modify will persist across container restarts.
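Once inside the container, you can check that the persistent environments are visible (assuming the provided setup points conda at `/envs`):

```bash
ls /envs          # contents of your persistent conda_envs storage
conda env list    # environments stored in /envs should appear here
```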
- Initialize Conda (NOT needed if using the provided bashrc):

  ```bash
  conda init bash
  exec bash
  ```

- Create the Environment:

  ```bash
  conda create -n myenv python=3.9
  ```

  Replace `myenv` with your desired environment name and `python=3.9` with the desired Python version.

- Activate the Environment:

  ```bash
  conda activate myenv
  ```

  If you already have existing environments, you can activate them directly:

  ```bash
  conda activate existing_env_name
  ```
With the environment activated, install packages using `conda` or `pip`:

- Using Conda:

  ```bash
  conda install numpy pandas
  ```

- Using Pip:

  ```bash
  pip install numpy pandas
  ```

Note: If you use pip to install a package in an environment, avoid running `conda install` in that environment afterwards; mixing the two package managers can break conda's dependency handling.
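A common way to avoid this issue (general conda guidance, not a requirement of this image) is to install conda packages first and use pip only for packages conda does not provide:

```bash
conda create -n myenv python=3.9 numpy pandas   # conda packages first
conda activate myenv
pip install some-pip-only-package               # hypothetical package name; pip last
```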
- `Dockerfile`: Builds the Docker image. Contains instructions for installing packages and configuring the environment.
- `entrypoint.sh`: An entrypoint script that runs when the container starts. It sets up the user and group inside the container to match those on the host system.
- `apt_requirements.txt`: A plain text file listing additional `apt` packages to install.
- `bashrc`: Contains custom Bash configurations, aliases, and environment settings.
- `run-docker-conda`: A script to run the Docker container with the appropriate settings, including user and group mappings.
To change the CUDA version you'll need to build a new image from scratch, choosing a different base image. This can be done by editing the first line of your Dockerfile:

```dockerfile
FROM nvidia/cuda:12.2.0-base-ubuntu20.04
```
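For example, to target a different CUDA or Ubuntu release, swap in another tag from the `nvidia/cuda` repository (the tag below is only an example; check Docker Hub for the tags that actually exist):

```dockerfile
FROM nvidia/cuda:11.8.0-base-ubuntu22.04
```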
Once you have built the base Docker image, you can extend and customize it without having to rebuild the entire image from scratch. This approach is efficient and allows for adding new functionalities as needed for specific projects.
If you need additional `apt` packages, you can extend the existing image by adding your new packages to `apt_requirements.txt` and creating a new Dockerfile that extends this image.
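`apt_requirements.txt` is just a list of package names, one per line; for example, you could append some extra tools like this (the package names below are only illustrative):

```bash
# Append example packages to apt_requirements.txt (one apt package name per line)
cat >> apt_requirements.txt <<'EOF'
htop
tmux
EOF
```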
- Create a New Dockerfile:

  ```dockerfile
  FROM <username>/cuda-miniconda:latest

  # Install extra apt dependencies
  ADD apt_requirements.txt /apt_requirements.txt
  RUN apt-get update && cat /apt_requirements.txt | xargs apt-get install -y
  RUN apt-get clean && rm -rf /var/lib/apt/lists/*
  ```
- Build the Extended Docker Image:

  ```bash
  docker build -t <username>/cuda-miniconda:custom_apt .
  ```
This way, you can extend the base image by installing extra packages without altering the original.
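You can then point `run-docker-conda` at the extended image via the `--image` option, for example:

```bash
./run-docker-conda --image $(whoami)/cuda-miniconda:custom_apt '' '0-3' /bin/bash
```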
If you want to add or modify shell behavior, such as adding aliases or environment variables, you can edit the provided `bashrc` and create a new Dockerfile to extend this image.
- Create a New Dockerfile:

  ```dockerfile
  FROM <username>/cuda-miniconda:latest

  # Copy your custom bashrc file
  COPY bashrc /etc/bash.bashrc
  RUN chmod 644 /etc/bash.bashrc
  ```
- Build the Extended Docker Image:

  ```bash
  docker build -t <username>/cuda-miniconda:custom-bash .
  ```
Note: The `bashrc` file is copied to `/etc/bash.bashrc` inside the container and is applied globally to all users.
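As an illustration, these are the kinds of lines you might add to `bashrc` (examples only, not part of the provided file):

```bash
alias ll='ls -alhF'     # convenience alias
export EDITOR=vim       # example environment variable
```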
The `entrypoint.sh` script performs several key functions when the container starts:

- User and Group Creation:
  - Checks if the user's UID and GID exist inside the container and creates them if they don't.
  - Maps the host user's groups into the container to ensure proper permissions.
- Ownership of Mounted Directories:
  - Adjusts ownership of `/exp` and `/envs` to match the user, ensuring read/write permissions.
- Suppressing Login Messages:
  - Creates a `.hushlogin` file to suppress login messages for a clean shell prompt.
- Switching to Non-Root User:
  - Uses `gosu` to switch from the root user to the mapped user before executing the provided command.
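For reference, a minimal sketch of this kind of entrypoint logic is shown below. The actual `entrypoint.sh` in this repository may differ in names and details; the `HOST_UID`, `HOST_GID`, and `HOST_USER` variables are assumptions for illustration:

```bash
#!/bin/bash
set -e

# Assumed variables passed in by the run script (illustrative names)
HOST_UID=${HOST_UID:-1000}
HOST_GID=${HOST_GID:-1000}
HOST_USER=${HOST_USER:-devuser}

# Create the group and user inside the container if they do not exist yet
getent group "$HOST_GID" >/dev/null || groupadd -g "$HOST_GID" "$HOST_USER"
id -u "$HOST_UID" >/dev/null 2>&1 || \
    useradd -m -u "$HOST_UID" -g "$HOST_GID" -s /bin/bash "$HOST_USER"

# Give the user ownership of the mounted directories
chown "$HOST_UID":"$HOST_GID" /exp /envs

# Suppress login messages for a clean prompt
touch "/home/$HOST_USER/.hushlogin"

# Drop privileges and run the requested command
exec gosu "$HOST_UID":"$HOST_GID" "$@"
```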
Feel free to customize and extend this setup to suit your needs. You can create new images based on this one by adding additional instructions.
- Creating Derived Images: Use this image as a base in your own `Dockerfile`:

  ```dockerfile
  FROM <username>/cuda-miniconda:latest
  # Install additional packages or configurations
  ```
- Submitting Improvements: If you make improvements that could benefit others, consider sharing them.
If you plan to attach VSCode to a running Docker container, you should avoid reinstalling vscode-server from scratch each time a new container is spawned. You can do this by mounting vscode-server from persistent storage.
To enable this, you can extend your image. A working example can be found in the `vscode/` directory. Just make sure to edit the first line, `FROM <username>/cuda-miniconda:latest`, so that it references the name of your base image.
You can build this extension as above, running:

```bash
cd vscode
docker build -t $(whoami)/cuda-miniconda:vscode .
```
You can now run a container with VSCode persistence by adding the `--vscode` flag to your `run-docker-conda` call:

```bash
run-docker-conda '0' '0-4' --vscode /bin/bash
```
By default, when the `--vscode` option is given, `run-docker-conda` will try to run the image `<username>/cuda-miniconda:vscode`. If you build your vscode image with a different name, be sure to pass the correct image name using the `--image` argument:

```bash
run-docker-conda '0' '0-4' --vscode --image my/vscode:image /bin/bash
```
A script to run `docker stats` only on the containers of a selected user:

```bash
docker-stats -u <username>
```

You can install this script in your local bin folder:

```bash
cp docker-stats ~/.local/bin/docker-stats
```
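For reference, this kind of per-user filtering can be approximated with standard Docker commands; the sketch below is only an assumption about how it might work (it filters by container name, which may not match the provided script's actual logic):

```bash
#!/bin/bash
# Sketch: show live stats only for containers whose name contains the given user
# Usage: docker-stats -u <username>   (argument parsing simplified for illustration)
user="$2"
names=$(docker ps --format '{{.Names}}' --filter "name=${user}")
if [ -n "$names" ]; then
    docker stats $names
else
    echo "No running containers found for ${user}"
fi
```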