Mono Docker language stack

A couple of weeks ago, Docker announced official pre-built Docker images for a bunch of popular programming languages. Each stack generally consists of two Dockerfiles: a base Dockerfile that installs system dependencies required for that language to run, and an onbuild Dockerfile that uses ONBUILD instructions to transform app source code into a runnable Docker image. As an example of the latter, the Ruby onbuild Dockerfile runs bundle install to install libraries specified in an app’s Gemfile.
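
To make the ONBUILD mechanism concrete, a Ruby-style onbuild Dockerfile looks roughly like this (a simplified sketch, not the verbatim official file; the base tag is illustrative):

FROM ruby:2.1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# these instructions fire later, when an app image is built FROM this one
ONBUILD COPY Gemfile /usr/src/app/
ONBUILD RUN bundle install
ONBUILD COPY . /usr/src/app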

Managing system dependencies and composing apps from source code is very similar to what we do with Stacks and Buildpacks at Heroku. To better understand the Docker approach, I created a language stack for Mono, the open source implementation of Microsoft’s .NET Framework.

UPDATE: There’s now a proper official Docker/Mono language stack; I recommend using that instead.

How to use

A working Docker installation is required for this section.

To turn a .NET app into a runnable Docker image, first add a Dockerfile to your app source root. The sample below assumes a simple console app with an output executable name of TestingConsoleApp.exe:

FROM friism/mono:3.10.0-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]

Now build the image:

docker build -t my-app .

The friism/mono images are available in the public Docker Registry and your Docker client will fetch them from there. Docker will then execute the onbuild instructions to restore NuGet packages required by the app and use xbuild (the Mono equivalent of msbuild) to compile source code into executables and libraries.
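
Concretely, this means the onbuild image carries instructions along these lines, which fire when your app image is built (a sketch, not the verbatim Dockerfile):

ONBUILD COPY . /usr/src/app
ONBUILD RUN nuget restore
ONBUILD RUN xbuild /property:Configuration=Release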

The Docker image with your app is now ready to run:

docker run my-app

If you don’t have an app to test with, you can experiment with this console test app.

Notes

The way Docker language stacks are split into a base image (that declares system dependencies) and an onbuild Dockerfile (that composes the actual app to be run) is perfect. It allows each language to get just the system libraries and dependencies it needs. In contrast, Heroku has only one stack image (in several versions, reflecting underlying Linux distribution versions) that all language buildpacks share. That stack is at once both too thick and too thin: it includes a broad assortment of libraries to make supported languages work, but most buildpack maintainers still have to hand-build dependencies and vendor in the binaries when apps are built.

Docker has no notion of a cache for ONBUILD commands whereas the Heroku buildpack API has a cache interface. No caching makes the Docker stack maintainer’s life easier, but it also makes builds much slower than what’s possible on Heroku. For example, Heroku buildpacks can cache the result of running bundle install (in the case of Ruby) or nuget restore (for Mono), greatly speeding up builds after the first one.
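
For comparison, a buildpack’s bin/compile script is handed a cache directory that survives between builds; using it can be as simple as copying restored packages in and out (a minimal sketch, not the actual Mono buildpack):

#!/bin/sh
# bin/compile <build-dir> <cache-dir>  (Heroku buildpack API)
BUILD_DIR=$1
CACHE_DIR=$2

# reuse packages restored during a previous build, if any
mkdir -p "$CACHE_DIR/packages" "$BUILD_DIR/packages"
cp -r "$CACHE_DIR/packages/." "$BUILD_DIR/packages/"

# restore into the build directory, then refresh the cache for next time
(cd "$BUILD_DIR" && nuget restore -PackagesDirectory packages)
cp -r "$BUILD_DIR/packages/." "$CACHE_DIR/packages/"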

Versioning is another interesting difference. Heroku buildpacks bake support for every language version into a single monolithic release. The language version to use is generally specified by the app being built, in a requirements.txt (or similar) file, and the buildpack uses that to install the correct packages.

Docker language stacks, on the other hand, support versioning with version tags. The app chooses what stack version to use with the FROM instruction in the Dockerfile that’s added to the app. Stack versions map to versions of the underlying language or framework (e.g. FROM python:3-onbuild gets you Python 3). This approach lets the Python stack, for example, compile Python 2 and 3 apps in different ways without a bunch of branching logic in the onbuild Dockerfile. On the other hand, pushing an update to all Python stack versions becomes more work because the tags have to be updated individually. There are tradeoffs in both the Docker and Heroku buildpack approaches; I don’t know which is best.

Docker maintains a free, automated build service that churns out hosted Docker images for everyone to use. For my Mono stack, Docker Hub pulls updates from the GitHub repo with the Dockerfiles and builds the relevant tags into images. This is very convenient for stack maintainers. Heroku has no hosted service for building buildpack binaries, although I have documented a (Docker-based) approach to scripting this work.

(Note that, while Heroku buildpacks are wildly successful, they follow an older standard that predates Docker by many years. If it seems like Docker has gotten more things right, it’s probably because that project was informed by Heroku’s experience and by the passage of time.)

Finally, and unrelated to Docker and Heroku, the Mono Project now has an APT package repository. This is pretty awesome, and I sincerely hope that the days of having to compile Mono from source are behind us. I don’t know if the repo is quite stable yet (I had to download a key without using SSL, the mono-devel package is versioned 3.10.0-0xamarin1 and the package fails to declare a dependency on udev), but it made the Mono Docker stack image a lot simpler. Check out the diff going from 3.8.0 (compiled from source) to 3.10.0 (installed from APT repo).
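
For context, installing Mono from that APT repository inside a Dockerfile boils down to something like the following (key ID and repository URL as documented by the Mono Project at the time; details may have changed since):

FROM debian:wheezy
# add the Xamarin/Mono signing key and package repository, then install Mono
RUN apt-key adv --keyserver keyserver.ubuntu.com \
      --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF \
 && echo "deb http://download.mono-project.com/repo/debian wheezy main" \
      > /etc/apt/sources.list.d/mono-xamarin.list \
 && apt-get update \
 && apt-get install -y mono-devel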

Building Heroku buildpack binaries with Docker

This post covers how binaries are created for the Heroku Mono buildpack, in particular the Mono runtime and XSP/fastcgi-mono-server components that are vendored into app slugs. Binaries were previously built using a custom buildpack with the build running in a Heroku dyno. That took a while though and was error-prone and hard to debug, so I have migrated the build scripts to run inside containers set up with Docker. Some advantages of this approach are:

  • It’s fast: builds can run on my powerful laptop
  • It’s easy to get a clean Ubuntu Lucid (10.04) image for reproducible clean-slate builds
  • Debugging is easy too, using an interactive Docker session

The two projects used to build Mono and XSP are on GitHub. Building is a two-step process: first, we do a one-time Docker build to create an image with the bare minimum (e.g. curl, gcc, make) required to perform Mono/XSP builds. s3gof3r is also added; it’s used at the end of the second step to upload the finished binaries to S3, where buildpacks can access them.
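
That first step amounts to a small Dockerfile like this (a sketch; package names and file layout are illustrative, the actual files are in the GitHub repos):

FROM ubuntu:10.04
# the bare minimum needed to download and compile Mono/XSP
RUN apt-get update && apt-get install -y curl gcc make autoconf automake libtool gettext
# s3gof3r (the gof3r binary) is used at the end of a build to upload binaries to S3
ADD gof3r /usr/local/bin/gof3r
# the build script that runs when a container is started from this image
ADD build.sh /build.sh
CMD ["/build.sh"]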

The finished Docker images have build scripts and can now be used to run builds at will. Since changes are not persisted to the Docker images, each new build happens from a clean slate. This greatly improves consistency and reproducibility. The general phases in the second build step are:

  1. Get source code to build
  2. Run configure or autogen.sh
  3. Run make and make install
  4. Package up result and upload to S3

Notice that the build scripts specify the /app filesystem location as the installation location. This is where slugs are mounted in Heroku dynos (you can check for yourself by running heroku run pwd). This may or may not be a problem for your buildpack, but Mono framework builds are not path-relative and expect to eventually run from the prefix they were configured with. So during the build, we have to use a configure setting consistent with how Heroku runs slugs.
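
Putting the phases and the /app prefix together, the Mono build script amounts to something like this (a condensed sketch; the URL, bucket name and flags are illustrative):

#!/bin/sh
set -e
# VERSION and the AWS credentials are passed in as environment variables (see below)

# 1. get the source for the requested version
curl -LO http://download.mono-project.com/sources/mono/mono-$VERSION.tar.bz2
tar xjf mono-$VERSION.tar.bz2 && cd mono-$VERSION

# 2./3. configure for /app, the location Heroku mounts slugs at, then build
./configure --prefix=/app
make && make install

# 4. package the result and upload it to S3 (gof3r is the s3gof3r CLI)
tar czf /tmp/mono-$VERSION.tar.gz -C /app .
gof3r put -b my-buildpack-bucket -k mono-$VERSION.tar.gz < /tmp/mono-$VERSION.tar.gz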

The build scripts are parametrized to take AWS credentials (for the S3 upload) and the version to build. Building Mono 3.2.8 is done like this:

$ docker run -v ${PWD}/cache:/var/cache -e AWS_ACCESS_KEY_ID=key -e AWS_SECRET_ACCESS_KEY=secret -e VERSION=3.2.8 friism/mono-builder

It’s now trivial to quickly build binaries for all the versions of Mono I want to support and upload them to S3. It’s also very easy to experiment with different settings to get binaries that are small (so slugs end up smaller), fast, and free of bugs.

There’s one trick in the docker run command: -v ${PWD}/cache:/var/cache. This mounts the cache folder from the current host directory into the container. The build scripts use that location to store and cache the different versions of downloaded source code. With the source code cache, turnaround times when tweaking build settings are even shorter because I don’t have to wait for the source to re-download on each run.
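
With the mount in place, the download step in the build script only hits the network on a cache miss, along these lines (illustrative):

# reuse a previously downloaded tarball from the mounted cache if present
TARBALL=/var/cache/mono-$VERSION.tar.bz2
if [ ! -f "$TARBALL" ]; then
    curl -L -o "$TARBALL" \
        http://download.mono-project.com/sources/mono/mono-$VERSION.tar.bz2
fi
tar xjf "$TARBALL"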

Running .NET apps on Docker

This blog post covers running simple .NET apps in Docker lightweight containers using Mono. I run Docker in a Vagrant/VirtualBox VM on Windows. This works great and is fast. Installation instructions are available on the Docker site.

Building the base image

First order of business is to create a Docker image that has Mono installed. We will use this as the base image for containers that actually run apps. To get the most recent Mono version (3.2.6 at the time of writing), I use packages created by Timotheus Pokorra, installed on an Ubuntu 12.04 LTS Docker image. Here’s the Dockerfile for that:

FROM ubuntu:12.04
MAINTAINER friism

RUN apt-get -y -q install wget
RUN wget -q http://download.opensuse.org/repositories/home:tpokorra:mono/xUbuntu_12.04/Release.key -O- | apt-key add -
RUN apt-get remove -y --auto-remove wget
RUN sh -c "echo 'deb http://download.opensuse.org/repositories/home:/tpokorra:/mono/xUbuntu_12.04/ /' >> /etc/apt/sources.list.d/mono-opt.list"
RUN apt-get -q update
RUN apt-get -y -q install mono-opt

Here’s what’s going on:

  1. Install wget
  2. Add the repository key to apt’s keyring
  3. Remove wget
  4. Add the openSUSE repository to the sources list
  5. Update the package index
  6. Install Mono from the repository

At first I did all of the above in one RUN command, since these steps together represent the single logical step of installing Mono and it seems like they should be just one commit. Nobody will be interested in the commit after wget was installed, for example. I ended up splitting it up into separate RUN commands since that’s what other people seem to do.
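
For comparison, the single-RUN variant of the same steps would have looked like this:

RUN apt-get -y -q install wget && \
    wget -q http://download.opensuse.org/repositories/home:tpokorra:mono/xUbuntu_12.04/Release.key -O- | apt-key add - && \
    apt-get remove -y --auto-remove wget && \
    echo 'deb http://download.opensuse.org/repositories/home:/tpokorra:/mono/xUbuntu_12.04/ /' >> /etc/apt/sources.list.d/mono-opt.list && \
    apt-get -q update && \
    apt-get -y -q install mono-opt
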
With that Dockerfile, we can build an image:

$ docker build -t friism/mono .

We can then run a container using the generated image and check our Mono installation:

vagrant@precise64:~/mono$ docker run -i -t friism/mono bash
root@0bdca65e6e8e:/# /opt/mono/bin/mono --version
Mono JIT compiler version 3.2.6 (tarball Sat Jan 18 16:48:05 UTC 2014)

Note that Mono is installed in /opt and that it works!

Running a console app

First, we’ll deploy a very simple console app:

using System;

namespace HelloWorld
{
	public class Program
	{
		static void Main(string[] args)
		{
			Console.WriteLine("Hello World");
		}
	}
}

For the purpose of this example, we’ll pre-build apps using the VS command prompt and msbuild and then add the output to the container:

msbuild /property:OutDir=C:\tmp\helloworld HelloWorld.sln

Alternatively, we could have invoked xbuild or gmcs from within the container.

The Dockerfile for the container to run the app is extremely simple:

FROM friism/mono
MAINTAINER friism

ADD app/ .
CMD /opt/mono/bin/mono `ls *.exe | head -1`

Note that it relies on the friism/mono image created above. It also expects the compiled app to be in an app/ folder next to the Dockerfile, so:

$ ls app
HelloWorld.exe

The CMD will simply use mono to run the first executable found in the build output. Let’s build and run it:

$ docker build -t friism/helloworld-container .
...
$ docker run friism/helloworld-container
Hello World

It worked!

Web app

Running a self-hosted OWIN web app is only slightly more work. For this example I used the sample code from the OWIN/Katana-on-Heroku post.

$ ls app/
HelloWorldWeb.exe  Microsoft.Owin.Diagnostics.dll  Microsoft.Owin.dll  Microsoft.Owin.Host.HttpListener.dll  Microsoft.Owin.Hosting.dll  Owin.dll

The Dockerfile for this exposes port 5000 from the container, and CMD is used to start the web app and specify port 5000 to listen on:

FROM friism/mono
MAINTAINER friism

ADD app/ .
EXPOSE 5000
CMD ["/opt/mono/bin/mono", "HelloWorldWeb.exe", "5000"]
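
Build the image as before; the tag just has to match the name used in the run command below:

$ docker build -t friism/mono-hello-world-web .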

We start the container and map port 5000 to port 80 on the machine running Docker:

$ docker run -p 80:5000 -t friism/mono-hello-world-web

And with that we can visit the OWIN sample site on http://localhost/.

If you’re a .NET developer this post will hopefully have helped you place Docker in the context of your everyday work. If you’re interested in more ideas on how to use Docker to deploy apps, check out this Automated deployment with Docker – lessons learnt post.

Heroku .NET buildpack now with nginx

Another weekend, and another bunch of updates to the .NET buildpack. Most significantly, ASP.NET apps now run with fastcgi-mono-server fronted by an nginx instance in each dyno. This replaces the previous setup, which used the XSP development web server. Using nginx is supposedly more production-ready.
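
In each dyno, fastcgi-mono-server listens on a local FastCGI socket and nginx proxies HTTP requests to it. A sketch of the moving parts (ports and paths are illustrative, not the exact buildpack config):

# backend, started separately in the dyno:
#   fastcgi-mono-server4 /applications=/:/app /socket=tcp:127.0.0.1:9000 &

server {
    listen 5000;                      # the port Heroku assigns via $PORT
    location / {
        root /app;
        index index.aspx index.html;
        fastcgi_index index.aspx;
        fastcgi_pass 127.0.0.1:9000;  # hand requests to fastcgi-mono-server
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}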

Other changes include setting the LD_LIBRARY_PATH environment variable and priming the Mono certificate store. This makes apps that use RestSharp, or otherwise expose Mono’s reliance on native zlib, work out of the box. It also makes calling HTTPS web services from Heroku work instead of failing.
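
Concretely, those two fixes amount to something like this at dyno startup (a sketch; the exact paths depend on where the buildpack vendors Mono):

# make Mono's bundled native libraries (e.g. zlib) visible to the dynamic linker
export LD_LIBRARY_PATH=/app/mono/lib:$LD_LIBRARY_PATH
# import Mozilla's root certificates so HTTPS calls from Mono don't fail
mozroots --import --sync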

These improvements came about because people tested their .NET apps on Heroku. Please do the same so that we can weed out as many bugs as possible. If you want to contribute, feel free to check out the TODO list in the README on GitHub.