I’ve been working my way through James Turnbull’s excellent The Docker Book (version 1.9.1). I just finished the chapter on using Docker for testing, which ended with deploying test suites through Jenkins CI.

Some example exercises in the book deploy Jenkins CI in a container, and that container also runs its own Docker daemon. Each build submitted to Jenkins creates one or more child containers using that containerized Docker daemon and runs its tests in those child containers. So I have a Linux VM running the Docker daemon, hosting a container that is also running a Docker daemon, hosting child containers that are running Jenkins builds. Docker in Docker.

I followed a link, helpfully provided in The Docker Book, to the Docker-in-Docker GitHub project and found that with current versions of Docker (I’m using 1.11) this Docker-in-Docker approach is no longer needed [1] to support containers for the Jenkins CI example. In fact, the author of the Docker-in-Docker project warns against using it for a CI system, specifically because of the real risk of data corruption.

Safe alternative: use the host Docker daemon from within a container

The simple way to allow a container process to start another container is to share the host’s Docker socket with the container, doing something like this:

docker run -v /var/run/docker.sock:/var/run/docker.sock -d --name some_container some_image

Assuming that the Docker client is installed in the image invoked, any docker command run within that container will be communicating with the Docker daemon running on the host.

Any containers created from within some_container would actually be created by the same Docker daemon that created some_container itself. Those new containers would be siblings of some_container.
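To make the sibling relationship concrete, here is a small sketch (the names some_container, some_image, and a_sibling are just placeholders, carried over from the example above or invented for illustration):

# From inside some_container, start another container. The request goes
# through the mounted socket to the host daemon, so the new container is
# a sibling of some_container, not a child.
docker exec some_container \
  docker run -d --name a_sibling some_image sleep 60

# On the host, both containers appear side by side in the same list.
docker ps --format '{{.Names}}'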

Now there should be no worries about data corruption from nested storage drivers, or shared access to the Docker image cache.

Managing data volumes

There is one catch, though: with sibling containers, data volumes will not work the way you probably expect.

In the Jenkins CI example from The Docker Book, a directory subtree created by Jenkins for a build, the “workspace” of the build, is mapped to a specific path in the container launched by the build. The relevant script lines are:

# Build the directory to be mounted into Docker.
#
MNT="$WORKSPACE/.."

# Execute the build inside Docker.
#
CONTAINER=$(docker run -d -v "$MNT:/opt/project" $IMAGE /bin/bash -c "cd /opt/project/$OS && rake spec")

The problem here is that the path $WORKSPACE refers to the filesystem inside the Jenkins CI container. That path is not visible to the Docker daemon running on the host, which is the daemon that actually executes the docker run -d -v "$MNT:/opt/project" ... command.
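To see the failure mode concretely, here is a hypothetical example (the job name some_job and the ubuntu image are only for illustration). The host daemon resolves the -v source path against the host filesystem, so the mount points at an empty or newly created host directory rather than at the files Jenkins just checked out:

# Inside the Jenkins container the workspace might be
# /opt/jenkins/data/workspace/some_job, but the host daemon looks for
# that path on the HOST, where the checked-out sources do not exist,
# so this listing comes up empty.
docker run --rm -v "/opt/jenkins/data/workspace/some_job:/opt/project" \
  ubuntu ls -la /opt/project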

So what can we do? My solution to this issue is—wait for it—to use another container, a data volume container [2]. What I did was create a data volume container that holds the directory where Jenkins allocates all the build workspaces, and then mount that data volume on the Jenkins CI container and on all the containers it creates.

I create the data volume container [3] once, using a command like this:

docker run --name dv_jenkins_workspace \
  -v /opt/jenkins/data/workspace gliderlabs/alpine:latest \
  /bin/true

The data volume path, /opt/jenkins/data/workspace, is the location where Jenkins will create workspaces for jobs.
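If you are curious where that volume actually lives on the host, docker inspect will show it; recent Docker versions report it under the Mounts section of the container’s metadata:

# Show the volume attached to the data volume container, including the
# host directory where Docker stores its contents.
docker inspect -f '{{ json .Mounts }}' dv_jenkins_workspace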

I then launch the Jenkins CI container, using the --volumes-from option to mount the volumes of the data volume container, with a command like this:

docker run -d -p 8080:8080 --name jenkins \
  --volumes-from dv_jenkins_workspace \
  -v /var/run/docker.sock:/var/run/docker.sock \
  datihein/jenkins-dockercli
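Once the Jenkins container is up, a quick sanity check (assuming the image includes the Docker CLI, as the datihein/jenkins-dockercli name suggests) is to confirm from inside it that both the host socket and the shared workspace volume are visible:

# The Docker client inside the Jenkins container should reach the host daemon.
docker exec jenkins docker info

# The workspace volume should be mounted at the expected path.
docker exec jenkins ls -la /opt/jenkins/data/workspace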

When the build script of a Jenkins job launches a container, it also uses the --volumes-from option in the same way, and it executes an ln -s command in the container to link the workspace path on the data volume to the location the build scripts expect:

# Build the image to be used for this job.
#
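# (the last line of the 'docker build' output ends with the new image ID,
#  which the tail/awk pipeline below extracts)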
IMAGE=$(docker build . | tail -1 | awk '{ print $NF }')

# Execute the build inside Docker.
#
CMD_="mkdir -p /opt/project"
CMD_="$CMD_ && ln -s ${WORKSPACE} /opt/project/workspace"
CMD_="$CMD_ && cd /opt/project/workspace"
CMD_="$CMD_ && rake spec"
CONTAINER=$(docker run -d --volumes-from dv_jenkins_workspace \
  "$IMAGE" /bin/bash -c "$CMD_")

The Jenkins CI example, recast to use sibling containers

I’ve completely reworked the two Jenkins CI examples from the Testing with Docker chapter of James Turnbull’s The Docker Book (version 1.9.1). I’ve forked the Git repository he used in the book so that there is a version that stays synchronized with this example.

The reworked example is in its own GitHub repository, https://github.com/JeNeSuisPasDave/example-docker-jenkins.

You can clone that repository and run both the single-shell example and the multi-shell example using the same Jenkins procedures as outlined in The Docker Book. The README describes how to build and launch the Jenkins CI container and the data volume container.

Note that I don’t walk through the actual demonstration of using Jenkins CI. For that you should follow the procedures given in The Docker Book. No interaction with Jenkins has to change – other than using the jenkins_single_shell_step and jenkins_multi_shell_step, as indicated in the README.

  1. The version of The Docker Book that I’m using was based on Docker 1.9, a version that, I think, did not support sharing the Docker socket.

  2. For more on data volumes and data volume containers, see https://docs.docker.com/engine/userguide/containers/dockervolumes/.

  3. I use the gliderlabs/alpine image for this because it is so very, very small, an order of magnitude smaller than the ubuntu image, for example.