Building Heroku buildpack binaries with Docker
This post covers how binaries are created for the Heroku Mono buildpack, in particular the Mono runtime and XSP/fastcgi-mono-server components that are vendored into app slugs. Binaries were previously built using a custom buildpack, with the build running in a Heroku dyno. That was slow, error-prone and hard to debug, so I have migrated the build scripts to run inside containers set up with Docker. Some advantages of this approach are:
- It’s fast: builds can run on my powerful laptop
- It’s easy to get a clean Ubuntu Lucid (10.04) image for reproducible clean-slate builds
- Debugging is easy too, using an interactive Docker session (see the example just after this list)
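On the debugging point: when a build fails, I can start a throwaway container from the build image (the same one used in the docker run example further down) and get an interactive shell:

$ docker run -i -t friism/mono-builder /bin/bash

From there I can run the build steps one at a time and inspect the filesystem when something breaks.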
The two projects used to build Mono and XSP are on GitHub. Building is a two-step process: First, we do a one-time Docker build to create an image with the bare minimum (e.g. curl, gcc, make) required to perform Mono/XSP builds. s3gof3r is also added – it’s used at the end of the second step to upload the finished binaries to S3 where buildpacks can access them.
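Creating the image is a standard docker build. The clone URL below assumes the repository is named after the image tag; check the GitHub projects for the actual locations:

$ git clone https://github.com/friism/mono-builder.git
$ cd mono-builder
$ docker build -t friism/mono-builder .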
The finished Docker images contain the build scripts and can now be used to run builds at will. Since changes are not persisted to the Docker images, each new build happens from a clean slate. This greatly improves consistency and reproducibility. The general phases in the second build step are:
- Get the source code to build
- Run configure or autogen.sh
- Run make and make install
- Package up the result and upload it to S3
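In shell form, those phases amount to something like the sketch below. The download URL, bucket name and flags are illustrative, not the exact contents of the real build scripts:

# fetch and unpack the requested Mono version
curl -L -o mono-${VERSION}.tar.bz2 \
  http://download.mono-project.com/sources/mono/mono-${VERSION}.tar.bz2
tar xjf mono-${VERSION}.tar.bz2
cd mono-${VERSION}

# configure for the /app prefix (see below), then build and install
./configure --prefix=/app
make
make install

# package the result and stream it to S3 with s3gof3r
cd /app
tar czf - . | gof3r put -b my-buildpack-bucket -k mono-${VERSION}.tar.gz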
Notice that the build scripts specify the /app filesystem location as the installation location. This is where slugs are mounted in Heroku dynos (you can check yourself by running heroku run pwd). This may or may not be a problem for your buildpack, but Mono framework builds are not path-relative: they expect to eventually run out of the location they were configured to be installed in. So during the build, we have to use a configure prefix consistent with how Heroku runs slugs.
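That check looks like this, with the toolbelt output trimmed to the relevant line:

$ heroku run pwd
/app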
The build scripts are parametrized to take AWS credentials (for the S3 upload) and the version to build. Building Mono 3.2.8 is done like this:
$ docker run -v ${PWD}/cache:/var/cache -e AWS_ACCESS_KEY_ID=key -e AWS_SECRET_ACCESS_KEY=secret -e VERSION=3.2.8 friism/mono-builder
It’s now trivial to quickly create binaries for all the versions of Mono I want to support and upload them to S3, for example with the loop sketched below. It’s also easy to experiment with different settings and get binaries that are small (so slugs end up being smaller), fast and free of bugs.
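A small loop like this one (the version list and credentials are placeholders) builds and uploads several versions back to back:

for v in 3.2.8 3.4.0; do
  docker run -v ${PWD}/cache:/var/cache \
    -e AWS_ACCESS_KEY_ID=key \
    -e AWS_SECRET_ACCESS_KEY=secret \
    -e VERSION=$v friism/mono-builder
done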
There’s one trick in the docker run command: -v ${PWD}/cache:/var/cache. This mounts the cache folder from the current host directory into the container. The build scripts use that location to store and cache the downloaded source code for the different versions. With the source-code cache, turnaround times when tweaking build settings are even shorter because I don’t have to wait for the source to re-download on each run.
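Inside the build script, the caching logic only needs a few lines; here is a sketch (the tarball naming is an assumption):

# re-use a cached source tarball if one exists, otherwise download it
TARBALL=/var/cache/mono-${VERSION}.tar.bz2
if [ ! -f "${TARBALL}" ]; then
  curl -L -o "${TARBALL}" \
    http://download.mono-project.com/sources/mono/mono-${VERSION}.tar.bz2
fi
tar xjf "${TARBALL}"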