Containers as Build Environments

Building open-source software from source is not necessarily hard: after all, typing make is fairly easy. But dealing with the tools and dependencies can be tedious, in particular if you don’t use them all the time. In this post, I want to describe how to use Docker containers as convenient, clean-room build environments.

This post assumes that you already have a container runtime installed and running. On my Linux distribution, installing the docker runtime package registers it with the system, and the daemon is started automatically at boot time; on other systems, you may have to start the daemon manually.
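On systemd-based distributions, for example, something along the following lines should do; the service name docker is an assumption and may differ between distributions:

# On host system:

systemctl status docker      # is the daemon running?
sudo systemctl start docker  # if not, start it
docker version               # verify that the client can reach the daemon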

A Sterile Installation

The gnuplot team is getting ready to release a new version (Version 6) of their plotting and visualization software, and I wanted to check it out. At this point, there is no official release, and hence no pre-built binaries, but there is a tarball with the release candidate.

The issue is that, in order to build the release candidate from the tarball, the system needs to have the development versions of a great number of libraries, in particular graphics and UI widget libraries — and their dependencies! I am not planning to hack on gnuplot regularly, however, and am therefore reluctant to install all the required dependencies onto my main working machine.

This is where containers come in. I can start a container, install all the dependencies into the container, and build gnuplot there. When the container shuts down, it will take all the installed dependencies with it, leaving my machine essentially untouched.

The first steps are to pull an appropriate base image, and then to start the container:

# On host system:

docker image pull ubuntu:jammy
docker container run --rm -v /opt:/opt -it ubuntu:jammy

As the base image, I choose Ubuntu, which is upstream of my locally installed Linux distribution, and then start the container. The --rm flag indicates that the container should be removed entirely once it has stopped running; the -it options start an interactive terminal session. The -v option creates a bind mount, which makes the local /opt directory available in the container; the compiled binary will be installed there. (Bind mounts like this are a convenient way to get files into and out of containers.)
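A quick way to convince oneself that the mount works: once the container is up, create a file under /opt inside it and look for that file on the host (the file name here is arbitrary):

# Inside container:

touch /opt/bind-mount-test

# On host system, in another terminal:

ls -l /opt/bind-mount-test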

Once the container starts, it presents us with a shell prompt. The following commands are all issued inside the container:

# Inside container:

apt update
apt install wget make gcc g++
apt install libreadline-dev
apt install libwxgtk3.0-gtk3-dev libcairo2-dev libpango1.0-dev libwebp-dev

wget https://sourceforge.net/projects/gnuplot/files/gnuplot/testing/gnuplot-6.0.rc2.tar.gz
tar xzf gnuplot-6.0.rc2.tar.gz
cd gnuplot-6.0.rc2

./configure --prefix=/opt/gnuplot6 --without-qt --without-lua --without-latex

make
make install

First, we use the standard apt workflow to install some tools and the required development libraries. (Only the top-level packages are named; apt will, of course, pull in a large number of dependencies.)

Then, we download the tarball into the container and unpack it. The wget tool is fortunately smart enough to follow any redirects that Sourceforge may throw our way. (For other projects, we might issue a git clone or similar command here.)

Now comes a bit of arcana: gnuplot is a C application and uses the GNU autoconf/automake tools to tailor the build process to a given target system. The ./configure script tests for the availability of various libraries and language features. It accepts a large number of options: some are generic, and some are specific to the gnuplot project; they are documented in the INSTALL file in the tarball. Here, I use the --prefix option to choose the directory where the finished binary and its supporting files will be installed, and disable various optional compile-time features (building them would require additional libraries). Remember that the host’s /opt directory is available inside the container, via the bind mount.
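The available options can also be listed without digging through the INSTALL file; this is standard autoconf behavior:

# Inside container:

./configure --help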

Now run make to compile, and make install to copy the finished binary and its supporting files (documentation, etc.) to the target directory.
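On a multi-core machine, the compile step can be sped up with make’s standard parallel-build flag (nproc, from coreutils, reports the number of available cores):

# Inside container:

make -j`nproc`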

The container can now be shut down, leaving behind nothing except the installed files under /opt/gnuplot6: a completely clean install.
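Assuming the standard autoconf layout under the prefix, the new binary can then be run directly on the host, provided the required runtime libraries (readline, wxWidgets, and so on) are present there:

# On host system:

/opt/gnuplot6/bin/gnuplot --version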

An Isolated Development Environment

In the previous example, all we wanted was a clean install, with as little left-over debris as possible. But it is also possible to use containers as build and development environments for a project under active development. I will again use the gnuplot release candidate as an example, but the same logic applies to the tip of a development branch.

In this case, I assume that there is a directory of source files and that I want to retain this directory and its contents, because I am editing the files in it. At the same time, I also need access to an installation directory on the host system, where the compiled binaries will be installed.

Assuming that the tarball has already been downloaded (and that an appropriate image is already available locally), the following commands would be executed on the host system:

# On host system:

tar xzf gnuplot-6.0.rc2.tar.gz
cd gnuplot-6.0.rc2

docker container run --rm \
    -v `pwd`:/gnuplot -v $HOME/bin:$HOME/bin \
    --env HOSTHOME=$HOME --env HOSTGID=`id -g` --env HOSTUID=`id -u` \
    -it ubuntu:jammy

The docker command in this case contains two bind mounts. One makes the current directory (containing the source files) available to the container as /gnuplot. The other maps the bin directory in my home directory to the corresponding location inside the container.

I also carry some information over from the host system into the container via environment variables: the host user’s home directory, group ID, and user ID. Each --env option creates an environment variable inside the container and sets its value; this way, the commands run inside the container do not need to hard-code host-specific values.
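Inside the container, these behave like any other environment variables; a quick echo confirms that the host’s values made it across:

# Inside container:

echo $HOSTHOME $HOSTUID $HOSTGID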

Inside the container, we first install tools and libraries as before, but then also add a user (inside the container) that shares group and user IDs with the user on the host system. The su command is then used to switch to this new user, who performs the actual compilation.

# Inside container:

apt update
apt install make gcc g++
apt install libreadline-dev
apt install libwxgtk3.0-gtk3-dev libcairo2-dev libpango1.0-dev libwebp-dev

addgroup --gid $HOSTGID user
adduser --uid $HOSTUID --gid $HOSTGID --home $HOSTHOME --disabled-password --gecos "" user

su user

cd /gnuplot
./configure --prefix=$HOSTHOME/bin/gnuplot6 --without-qt --without-lua --without-latex

make
make install

Because the active user inside the container shares group and user IDs with the user on the host system, all build artifacts in the working directory gnuplot-6.0.rc2, as well as in the user’s bin directory, are owned by the host user, and can be modified or removed by the user on the host without any special permissions. Only the actual build process (and the tools and libraries required by it) exists inside the container. All persistent files and data live on the host system, giving a seamless user experience.
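This is easy to verify: listing the working directory on the host with numeric IDs shows the owner and group of the build artifacts, which should match what id reports for the host user:

# On host system:

ls -ln          # numeric owner/group of the build artifacts
id -u; id -g    # ...should match these values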

Two more points:

  • For ongoing development, one may want to shut down and restart the container occasionally. For that, it is useful to create a new container image that includes the installed dependencies and the new user information. One may either use a Dockerfile or the docker commit command for this purpose; a minimal Dockerfile sketch follows after this list.
  • In the example above, I use two bind mounts, each one for a relatively restricted subdirectory. I guess one could instead use just a single bind mount for the user’s entire home directory. But this seems unnecessary; the idea of minimal access certainly fits better with the overall container concept.
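Here is what such a Dockerfile might look like. This is a hypothetical sketch: the hard-coded IDs (1000) must match the host user’s id -u and id -g, and the image tag gnuplot-build is made up:

# Dockerfile (hypothetical sketch):

FROM ubuntu:jammy
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    make gcc g++ libreadline-dev \
    libwxgtk3.0-gtk3-dev libcairo2-dev libpango1.0-dev libwebp-dev
RUN addgroup --gid 1000 user && \
    adduser --uid 1000 --gid 1000 --disabled-password --gecos "" user
USER user
WORKDIR /gnuplot

# On host system: build the image once, then start containers
# from it with the same bind mounts as before:

docker image build -t gnuplot-build .
docker container run --rm -v `pwd`:/gnuplot -v $HOME/bin:$HOME/bin --env HOSTHOME=$HOME -it gnuplot-build

The docker commit route is even simpler: set up a running container by hand, as above, and then snapshot it into an image with docker container commit <container-id> gnuplot-build.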

Credits

I got the idea of using containers (or VMs) as build environments, while keeping all files strictly on the host’s filesystem, from a blog post that I have since lost and cannot seem to find again. Credit to the Unknown Author.

The concept may be a bit old hat by now, but its elegance, and the seamlessness of the experience, continue to impress me. It deserves to be more widely known — until everybody knows it, in fact.