Docker
- Start and enable docker
- Start it using
$ systemctl start docker
- Enable Docker automatically starting upon reboot using
$ systemctl enable docker
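- As a sketch, recent systemd versions let you do both in one step and then verify the service is active:
$ systemctl enable --now docker
$ systemctl status docker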
- View and delete containers
- View the containers:
$ docker ps -a
- Containers are stored in
/var/lib/docker/containers
- Delete a container:
$ docker rm CONTAINERID
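- A short sketch of the list-then-delete flow (the container ID shown is hypothetical):
$ docker ps -a
$ docker rm 3f4a9c1e2b7d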
- Copy a file to a container using
$ docker cp hostsrcfile container_id:containerdstfile
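- For example (the file name and container ID are hypothetical):
$ docker cp notes.txt 3f4a9c1e2b7d:/root/notes.txt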
- Save the state of a container using
$ docker commit container_id username/imagename
- Optionally you can include commit messages with
-m 'message details in quotes'
- See the commit history of a container (similar to VMware snapshots)
$ docker history imagename
- Revert to a previous commit (similar to reverting to a VMware snapshot)
$ docker tag IMAGEID imagename
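- A sketch of the full commit, history, and revert flow (the container ID, image name, and commit message are hypothetical):
$ docker commit -m 'installed tools' 3f4a9c1e2b7d username/imagename
$ docker history username/imagename
$ docker tag IMAGEID username/imagename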
- Run or attach to running containers
- Create new container and run a command in it using
$ docker run container_image /bin/bash
- Create a new container and run a command in it interactively using
$ docker run -t -i container_image /bin/bash
- -d : containers started in detached mode exit when the root process used to run the container exits
- -d=false : starts the container in foreground mode (default)
- Attach to a container terminal’s standard input, output, and error
- Options below can be included in the run command above
- -a=[] : attach to STDIN, STDOUT, and/or STDERR
- -t : allocate a pseudo-tty
- -i : keep STDIN open even if not attached
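- As a sketch combining the options above, a container can be started detached and attached to later (the image name and container ID are hypothetical):
$ docker run -d -t -i container_image /bin/bash
$ docker attach CONTAINERID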
- Leave and shut down running containers (be sure to save their state first)
- When attached, exit it using
$ exit
- Gracefully stop a running container from outside the attached terminal using
$ docker stop [option] container_id
- Kill a running container from outside the attached terminal using
$ docker kill [option] container_id
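- For example, allow a 30 second grace period before the stop escalates to a kill, or send a specific signal (the container ID is hypothetical):
$ docker stop -t 30 3f4a9c1e2b7d
$ docker kill -s SIGTERM 3f4a9c1e2b7d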
- Create a dedicated virtual disk to hold Docker's storage (/var/lib/docker)
- Create the device:
- VM->Settings, Add new device
- Specify Hard Disk and select next
- Specify SCSI and select next
- Specify Create a new virtual disk and select next
- Specify the size (I typically use 50GB) and keep the virtual disk as a single file, then select next
- Name the disk file with a -docker suffix, save it in the same directory as your VM (e.g., Kali 2020.3-docker), and select finish
- Reboot the VM
- Prepare the partition
- Run gparted to determine the device of the new disk:
$ sudo gparted
- The new device should be unallocated and the same size that was specified above (e.g., /dev/sdb)
- Create a msdos partition table on the device using Device->Create Partition Table
- Create a new ext4 partition on it using Partition->New and specify docker for Label, ext4 for File system
- Select apply all operations (checkbox in toolbar)
- Close gparted
- Your new partition will show up in /dev with a 1 appended to the device (e.g., /dev/sdb1)
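- As a quick check from a terminal (assumes the /dev/sdb example above):
$ lsblk /dev/sdb
$ sudo blkid /dev/sdb1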
- Create a new hard disk device and ext4 partition as described above (e.g., /dev/sdb and /dev/sdb1)
- If you have installed Docker already
- Stop the Docker service before moving its data:
# systemctl stop docker
- Backup the current folder:
# mv /var/lib/docker /var/lib/docker-backup && mkdir /var/lib/docker
- Mount the new ext4 partition:
# mount /dev/sdb1 /var/lib/docker
- Copy contents from original into new folder:
# cp -rf /var/lib/docker-backup/* /var/lib/docker
- Otherwise
- Create a directory for the mount:
$ sudo mkdir /var/lib/docker
- Set up the new ext4 partition to automount by editing /etc/fstab and adding the line below:
/dev/sdb1 /var/lib/docker ext4 defaults 0 1
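- A sketch to test the fstab entry without rebooting (run as root):
# mount -a
# df -h /var/lib/docker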
- If a docker build keeps failing, it is possible that a previous build failed while the network was down and cached a bad intermediate state. Re-run the build and add the --no-cache option to force starting the build from a clean state; see the sketch below
- It might help to specify DNS servers for Docker; see the section entitled 'Specify DNS servers for Docker' here
- Hints for debugging a failing build are here
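- A sketch of a clean rebuild (the image name and build context are hypothetical):
$ docker build --no-cache -t imagename .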
See the discussion here. Around 2021, some Linux distributions began migrating to cgroup v2. Many of our Docker containers were originally set up to work with earlier versions. The problem manifests when attempting to run one of these older Docker container setups on a Linux distribution that uses cgroup v2. Symptoms of the failed Docker run will likely include:
...
Failed to create /init.scope control group: Read-only file system
Failed to allocate manager object: Read-only file system
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
...
The root cause is that "when systemd sees a unified cgroupfs at /sys/fs/cgroup it assumes it should be able to write to it which normally should be possible but is not the case here". When the issue arises, you'll need to modify the Dockerfile and also the commands used to run it as we did in our termsvr application here.
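As a sketch of one commonly cited workaround (an assumption here, not necessarily the exact termsvr change, which is in the linked commit), Docker 20.10 and later can give the container the host's cgroup namespace and a writable cgroup mount so systemd inside the container can manage its own control groups:
$ docker run -t -i --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw imagename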
- Official documentation
- Documentation for creating containers:
- Documentation for starting a container