Aah, yes! Finally we're getting somewhere exciting! If you went through my last post, and perhaps studied a little on your own, you probably have the hang of the Docker Engine CLI basics by now. This time we'll look into Docker's own tool for creating multi-container apps, called Docker Compose. The example multi-container app in this exercise will include an HAProxy load balancer, a WordPress front end and a MariaDB back end.

Before we get to work we'll need our tools though, so first things first, let's install Compose.

Installing Compose

Installing Compose is really straightforward. Either:

  • Curl it:

$ curl -L https://github.com/docker/compose/releases/download/latest-version/docker-compose > /usr/local/bin/docker-compose and make it executable: $ chmod +x /usr/local/bin/docker-compose.

  • Install it via pip (the preferred way):

$ pip install docker-compose.
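Collected in one place, the curl route end to end might look like this (I've kept the latest-version placeholder from the URL above; substitute a real release tag when you run it):

```shell
# Download the Compose binary (substitute a real release tag for "latest-version")
curl -L https://github.com/docker/compose/releases/download/latest-version/docker-compose \
  > /usr/local/bin/docker-compose

# Make it executable
chmod +x /usr/local/bin/docker-compose

# Or, alternatively, install via pip
pip install docker-compose

# Either way, verify the installation
docker-compose --version
```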

You can read the official Compose installation manual here.

Creating the multi container app

Compose uses YAML files to compose the multi-container apps. You'll spend most of your time fine-tuning these YAML files. A really basic one will look something like this:

[Screenshot of a basic docker-compose.yml file]
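A minimal docker-compose.yml along these lines might look like the sketch below (service names, the published port and the password are illustrative assumptions, not the exact file from this post):

```yaml
version: "2"

services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"          # publish WordPress on host port 8080 (illustrative)
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_PASSWORD=examplepass   # illustrative password
    depends_on:
      - db

  db:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=examplepass     # must match the WordPress DB password
```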

The containers here are referred to as "services". You'll want to get familiar with that term, as all the container orchestration tools like Kubernetes, Mesos or Docker Swarm talk about services rather than containers. I won't go into too much detail about the YAML file syntax. For that, you can read the official Docker Compose reference.

Looking at my YAML file, I'm not convinced I actually did anything special here besides adding the load balancer and SSL termination to it.

I didn't go for the vanilla image from the official HAProxy repository, but chose a pre-configured one instead. I highly suggest you read about its usage here. Still, there are some pitfalls I want to discuss so you don't have to bang your head against the wall like I did.

  • You don't want to expose docker.sock to the world with read and write permissions. Read-only permissions will do, although having the socket exposed at all is questionable too.
  • You'll want some chronological order in which the services boot up. That's why I've used the depends_on: key. This makes HAProxy wait for the WordPress service to spin up before spinning up itself.
  • Do not rename your containers with the container_name: key. A fixed container name prevents Compose from scaling the service, so just don't use it here.
  • Services that you bring under load balancing must be spun up in the correct default network. Docker Compose generates a unique <project name>_default network for each composed multi-container app.
  • If you want HTTP to HTTPS redirection, add the HTTP URL to the VIRTUAL_HOST environment variable in the service definition like so: http://localhost, https://localhost.
  • Add FORCE_SSL=yes as an environment variable to the service.
  • Passing the certificate to the load balancer as a PEM file is much cleaner (IMO) than copy-pasting the content. If you do, you'll have to point the load balancer at the certificate folder with an environment variable.
  • Pass the PEM file as cert<1..n>.pem, as otherwise it'll be used as the default certificate for the whole load balancer. I haven't tried wildcard certificates, but I suspect it's better to have different certificates for different services.
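Putting the bullet points above together, the load-balancer part of the YAML file might be sketched like this (assuming a dockercloud/haproxy-style pre-configured image; the service names and host paths are illustrative assumptions):

```yaml
version: "2"

services:
  lb:
    image: dockercloud/haproxy     # pre-configured HAProxy image (assumption)
    depends_on:
      - wordpress                  # wait for WordPress before spinning up
    volumes:
      # Expose the Docker socket read-only (:ro), never read-write
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount the folder holding cert1.pem, cert2.pem, ... into the container
      - ./certs:/certs
    environment:
      - CERT_FOLDER=/certs/        # tell the load balancer where the PEM files live
    ports:
      - "80:80"
      - "443:443"

  wordpress:
    image: wordpress
    environment:
      # Listing both URLs enables HTTP -> HTTPS redirection
      - VIRTUAL_HOST=http://localhost, https://localhost
      - FORCE_SSL=yes
```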

Starting and scaling the services

Now that we're fairly sure our multi-container app is watertight, let's boot it up with: docker-compose up. If you get any fuss from bash, use the -f flag to point at your YAML file, or cd your way into the folder that contains it. Leaving out the -d flag at first is a nice way to debug straight from the get-go, but who would make mistakes, right? Reference on docker-compose commands here.

Pretty cool, huh? Definitely pretty awesome, but the real magic starts with scaling. That's why we included the load balancer in the first place! Now, Ctrl+C your way out of the "session" and compose up again with the -d flag. I've included the whoami image in the YAML file to demonstrate round-robin load balancing in action. Scale the whoami service with: $ docker-compose scale whoami=3. This launches two more containers and links them to the whoami service. Open your browser and navigate to http://localhost. Refresh the page and you'll notice that the container ID changes. Yay! Load balancing works!


Use $ docker-compose scale whoami=1 to scale back down again. Finally, take all the services down with $ docker-compose down. Now that is really kick-ass!
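The whole up/scale/down cycle from above, collected in one place:

```shell
# Bring the whole app up in the background
docker-compose up -d

# Scale the whoami service out to three containers
docker-compose scale whoami=3

# ...and back down to one
docker-compose scale whoami=1

# Finally, tear the whole app down
docker-compose down
```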


Next time we will launch our multi-container app to the cloud. I've chosen Kubernetes (K8s) for the orchestration service, and I'll probably choose Google Cloud Engine for the cloud service, even though I was pretty convinced about Microsoft Azure in the beginning. K8s originated at Google, so I can feel the synergy already. Hopefully you don't mind the change of heart :).

..Hey...wait a minute!

I had MariaDB as a containerized back-end service and didn't mention anything about it. That's because I'm not proud of how I'm handling MariaDB's stateful nature. I'm just mapping a host folder into the container to be used as the database folder, with the volumes: key. This is the "hack-job" way of solving the statefulness issue, but it works. Using a cloud provider's blob storage, for example, is in my view a valid way to get fault-tolerant data storage that survives container destruction. You can test this with our demo app. Go ahead and install WordPress. Then take the whole app down and spin it up again. You'll notice that WP starts up already installed.
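For reference, the volume mapping described above boils down to something like this in the MariaDB service definition (the host path is an illustrative assumption):

```yaml
services:
  db:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=examplepass   # illustrative password
    volumes:
      # Map a host folder over MariaDB's data directory so the database
      # survives container destruction
      - ./mariadb-data:/var/lib/mysql
```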

There are many ways to handle stateful apps in containers, but I think Flocker is perhaps the most elegant solution. Flocker's datasets follow containers through the container life cycle and can be moved between containers and even hosts. Interested? Read more about Flocker.