Travis CI
One of the possible use cases of Docker is its integration with Travis CI. As the documentation reports, enabling Docker affects the selection of the virtualization environment and its features.
Among its many advantages, the Docker approach gives you the freedom to choose the target Linux distribution, removing the constrained choice of the few Ubuntu versions supported out of the box by Travis CI. This can be very useful if the project needs packages, build tools, or frameworks that are not available, or are outdated, in the default Ubuntu version. Do you need to test the DUT against other distributions? Just use or create a Fedora, Red Hat, or openSUSE image, and you're ready to go.
In this short overview, general guidelines on how to properly integrate Docker into the `.travis.yml` configuration file are presented. Since most of our projects use CMake, only out-of-tree build examples will be introduced. Before continuing, reading this primer on Docker + Travis CI is advised.
Docker is enabled by listing it as a service:

```yaml
services:
  - docker
```
Docker allows you to ship all the dependencies within the image, so `sudo` can be safely disabled and a container-based virtualization environment will be used:

```yaml
sudo: false
```
The images can be downloaded from Docker Hub in the `before_install` step:

```yaml
before_install:
  # Pull the docker images
  - docker pull provider/baseimage1
  - docker pull provider/baseimage2
```
Considering the current development state of these images (highly WIP), two different scenarios can be outlined:
- Ephemeral containers: valid when an image already contains all the dependencies the project (or DUT) needs
- Persistent containers: valid when the DUT requires dependencies that are not shipped with the Docker image
As a reference, consider the robotology/gazebo-yarp-plugins project. Its `.travis.yml` executes every required command in a clean container, which is deleted after the process linked to the executed command dies.
The three build phases (`cmake`, `make`, `make install`) share the folder of the DUT, which is made persistent by mounting it inside the container as a Docker volume. Travis' `TRAVIS_BUILD_DIR` variable contains the absolute path where the project's git tree has been cloned, and it is the persistent folder that will be used. It is worth noting that this approach uses a fresh operating system for every operation, allowing commands to be executed in isolated environments.
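The snippets below refer to a `$PROJECT_DIR_ABS` variable. A minimal sketch of how it could be defined in `.travis.yml` as an alias of `TRAVIS_BUILD_DIR` (this mapping is an assumption for illustration, not part of the reference project's configuration):

```yaml
env:
  global:
    # Assumption: PROJECT_DIR_ABS aliases Travis' TRAVIS_BUILD_DIR,
    # the absolute path where the project's git tree is cloned
    - PROJECT_DIR_ABS=$TRAVIS_BUILD_DIR
```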
The basic steps are the following:
The configuration of the DUT is performed in the `before_script` step. A minimal example is the following:

```yaml
before_script:
  # Run CMake in the persistent $PROJECT_DIR_ABS folder
  - cd $PROJECT_DIR_ABS
  - mkdir build
  - >-
    docker run -it --rm
    -v "$PROJECT_DIR_ABS:/app"
    -w /app
    provider/baseimage
    sh -c 'cd build && cmake ./..'
```
When the project is configured, the container dies and the `build/` folder is kept in `$PROJECT_DIR_ABS/build`.
P.S. If a background process is required by the testing routine (e.g. `yarpserver` or a ROS master node), a detached container (`docker run -it -d (...)`) can be spawned in this step. To facilitate its removal, use the `--name` option to assign it a name.
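As a sketch of such a background container (the `provider/yarpimage` image and the `yarpserver` command are illustrative assumptions, not taken from the reference project):

```yaml
before_script:
  # Spawn a detached, named container running the background service;
  # the name makes it easy to stop and remove it later
  - >-
    docker run -it -d
    --name yarpserver
    provider/yarpimage
    yarpserver
```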
A new container can now build the project. The process is similar to the previous one, but this time it is executed inside the `script` step:

```yaml
script:
  # Build the project
  - >-
    docker run -it --rm
    -v "$PROJECT_DIR_ABS:/app"
    -w /app
    provider/baseimage
    sh -c 'cd build && make'
```
After the project has been built out-of-tree, it is possible to test its installation in another clean container. Usually the install step doesn't fail, so it can be placed either in the `script` or in the `after_script` step:

```yaml
  - >-
    docker run -it --rm
    -v "$PROJECT_DIR_ABS:/app"
    -w /app
    provider/baseimage
    sh -c 'cd build && make install'
```
P.S. If a background container is present, it should be stopped and removed inside the `after_script` step:

```yaml
after_script:
  # Stop and remove running containers
  - docker stop containername
  - docker rm containername
```

This step is not mandatory, since the running virtual environment will be deleted anyway at the end of the test.
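Putting the pieces together, a minimal `.travis.yml` sketch for the ephemeral scenario could look as follows (`provider/baseimage` is a placeholder, and `PROJECT_DIR_ABS` is assumed to alias `TRAVIS_BUILD_DIR`):

```yaml
sudo: false
services:
  - docker

env:
  global:
    - PROJECT_DIR_ABS=$TRAVIS_BUILD_DIR

before_install:
  - docker pull provider/baseimage

before_script:
  - cd $PROJECT_DIR_ABS
  - mkdir build
  - >-
    docker run -it --rm -v "$PROJECT_DIR_ABS:/app" -w /app
    provider/baseimage sh -c 'cd build && cmake ./..'

script:
  - >-
    docker run -it --rm -v "$PROJECT_DIR_ABS:/app" -w /app
    provider/baseimage sh -c 'cd build && make'
  - >-
    docker run -it --rm -v "$PROJECT_DIR_ABS:/app" -w /app
    provider/baseimage sh -c 'cd build && make install'
```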
It often happens that the pre-built images don't contain all the software needed to build and test the project. In this case, an additional `install_deps` step should be performed before the ones above. After this step the container won't be deleted, and a snapshot of the base image plus the installed dependencies will be saved.
In the `before_install` step, after pulling the base image(s), the dependencies can be installed inside it as follows:

```yaml
before_install:
  # Pull the docker images
  (...)
  # Install the dependencies
  - >-
    docker run -it
    -v "$PROJECT_DIR_ABS:/app"
    -w /app
    --name baseimagename
    provider/baseimage
    sh -c 'sh .ci/install_deps_docker.sh'
  - docker commit baseimagename imagewithdependencies
  - docker rm baseimagename
```
The `.ci/install_deps_docker.sh` script should be developed to contain all the commands that fetch, build, and install the dependencies inside the container. In order to create a snapshot of the container with the installed dependencies after the script has finished, the `--rm` option must not be used. With `docker commit` the snapshot is created; this new `imagewithdependencies` image will now take the place of the initial `provider/baseimage`, and the DUT will be configured, built, and installed inside it. Finally, the stopped `baseimagename` container can be deleted.
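A minimal sketch of such a script, assuming a Debian- or Fedora-based base image (the helper name and package names are illustrative, not taken from the reference project):

```shell
#!/bin/sh
# Hypothetical .ci/install_deps_docker.sh sketch: inside the container we
# usually run as root, so no sudo is needed.
set -e

install_packages() {
    # Pick the package manager that matches the base image's distribution
    if command -v apt-get >/dev/null 2>&1; then
        apt-get update -qq && apt-get install -y "$@"   # Debian/Ubuntu images
    elif command -v dnf >/dev/null 2>&1; then
        dnf install -y "$@"                             # Fedora images
    else
        echo "unsupported base image" >&2
        return 1
    fi
}

echo "Installing build dependencies..."
# Replace the placeholder list below with the project's real dependencies:
# install_packages build-essential cmake git
# Source-built dependencies (fetch, build, make install) would follow here.
```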
Now the `cmake`, `make`, and `make install` steps match exactly the first scenario. The only difference resides in the image from which these three steps generate their containers: `imagewithdependencies` instead of `provider/baseimage`.
E.g. for the `cmake` step:
```yaml
before_script:
  # Run CMake in the persistent $PROJECT_DIR_ABS folder
  - cd $PROJECT_DIR_ABS
  - mkdir build
  - >-
    docker run -it --rm
    -v "$PROJECT_DIR_ABS:/app"
    -w /app
    imagewithdependencies
    sh -c 'cd build && cmake ./..'
```