Blog

Generating Deployable Quepid Artifacts with Containers

This is part 1 of a 3 part series on our move to containers as a platform for Quepid.

As Quepid has matured, OSC has gone through an evolution of operational tools to run the platform. Quepid’s migration to Rails prompted an exploration of deployment methods for our new stack. Under the old Python Flask application we would log in to the appropriate environment’s application server and run a script. It would perform a git pull, run migrations, and restart the application process. There are deployment tools for Python such as Fabric, but this workflow worked well for us given the number of hosts and our team size. With additional developers and the switch to Ruby we have Capistrano, which, like Fabric, handles code deployment and execution of remote tasks. But what about taking a different tack? Let’s dive into containers, a modular way to encapsulate deployable artifacts.

Containers are interesting: they contain all the components required to run the application. This includes the language runtime, its dependencies, our application code, and any configuration files. By encapsulating the entire application stack we get powerful guarantees. If I build a container on my machine and it runs, I am certain that it will run on anyone else’s machine. We don’t have to quibble over Ruby or gem dependencies; everything needed to run is there in the container. Instead of sending that container to a team member, let’s ship it out to the staging environment. There is no need to investigate whether a dependency is present; we push the container and run it. Deployment is a breeze! If production experiences an issue we can pull down that specific version, run the application, or poke around inside the container to investigate.

Containers run in isolation. We can run multiple instances of the same container (or different versions!) on the same host. Each container receives its own isolated disk and network space. Linking containers together with shared resources is also possible. We could have a containerized application writing logs to a directory, and a separate containerized process pulling the logs from that shared directory and shipping them off to a log service. In this approach each container is responsible for one task. The log collection container could even be paired with other services. Think of containers as following the single responsibility principle.
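The log-shipping pairing described above can be sketched with a shared named volume (a Docker 1.9+ feature); both image names here, quepid-app and log-shipper, are hypothetical stand-ins:

```shell
# A named volume that both containers will mount.
docker volume create app-logs

# The application container writes its logs to /var/log/app.
docker run -d --name app -v app-logs:/var/log/app quepid-app

# A single-purpose collector mounts the same directory read-only
# and ships anything it finds off to a log service.
docker run -d --name shipper -v app-logs:/var/log/app:ro log-shipper
```

Each container does one job; the collector could be reused unchanged alongside any other service that writes logs to the shared directory.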

OK, so containers are awesome; where do I get started? We looked at a few different technologies in the space: LXC, rkt (pronounced "Rocket"), and Docker. At the time LXC was the most mature, but it had a steep learning curve, and one of our goals was to avoid requiring a ton of training or tooling around our eventual solution. Rkt was the new kid on the block. Some major companies invested in its technology and container specification, and it has a fun feature around cryptographically signed containers. Unfortunately, at the time it was very young, with interfaces and formats still in flux. It’s worth noting that it has since hit version 1.0 and deserves another look. That leaves us with Docker, a container engine with a massive following. Its containers are built from a manifest, called a Dockerfile, that is included with your application.

Docker seems like a solid choice; how does one bootstrap on it? Our Quepid developers run OS X or Linux machines. Docker is Linux-native, so those developers had no special hoops beyond installing the package from their respective package managers. The OS X devs had a few hoops to jump through. Today there is the wonderful Docker Toolbox, a suite of tools for setting up a complete environment, but when we were investigating this last year it didn’t exist. On our machines we installed boot2docker, a tool that launches a VirtualBox-based virtual machine running the Docker daemon, along with the docker CLI tools. This VM is pretty slim, since all other dependencies are wrapped up in the container. Another option worth exploring is the CoreOS Vagrant image. It runs a VM just like boot2docker, but provides some additional features and supports VMware as well as VirtualBox. In both cases, docker CLI commands are forwarded to the VM.
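With boot2docker installed, bootstrapping on OS X looked roughly like this (a sketch of the usual workflow, not our exact setup script):

```shell
# Download the ISO and create the VirtualBox VM hosting the Docker daemon.
boot2docker init

# Boot the VM.
boot2docker up

# Export DOCKER_HOST and friends so the local docker CLI
# talks to the daemon inside the VM.
eval "$(boot2docker shellinit)"

# The client runs locally; the server responds from the VM.
docker version
```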

Now that Docker is running, building a new container is a simple call to docker build. The Dockerfile is processed line by line, with the final container composed of layers, each representing the disk state after a command is executed. This is pretty powerful, as layers may be reused when the layers before them have not changed. For instance, if I install Ruby in the first layer, then copy our application in, then precompile our frontend assets, the likelihood that the Ruby layer will be regenerated is small. We can forcibly bust the cache, but really we want fast build times.
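One common way to exploit layer caching is to order the rarely changing steps first; this sketch shows the general pattern rather than Quepid's actual Dockerfile:

```dockerfile
# Base and language runtime: these layers are rebuilt only
# when the lines themselves change.
FROM centos:7
RUN yum install -y ruby ruby-devel && yum clean all

# Copy only the dependency manifests first, so `bundle install`
# re-runs when the Gemfile changes, not on every code edit.
COPY Gemfile Gemfile.lock /app/
WORKDIR /app
RUN bundle install

# Application code changes most often, so it comes last.
COPY . /app
```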

Every Dockerfile starts with a base image, defined with the FROM command. In some cases the base is an empty layer; in others it may already contain build tools, language runtimes, and other packages. An example is Phusion’s passenger-docker, which includes everything required to run a Ruby, Python, or Node app with their Passenger application server. It’s a great starting point for beginners and solves a lot of the stack early on in the Dockerfile. We started here, but ended up taking a different approach. Instead of including all of those extra pieces, it made more sense to start with a slimmer base. In our case we went with CentOS, which provides a set of basic commands and the yum package manager.

Next we install the additional packages our Rails application needs, including Ruby, NodeJS, and the MariaDB libraries. These packages are installed with the RUN command, which executes yum install commands. The next big step is to COPY the application into the container, which copies all of the code and assets to the specified path within the container’s filesystem. Finally, we RUN our “install” rake tasks: pulling in bower dependencies, installing gems with bundler, and pre-compiling assets.
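Put together, the steps above might look something like this sketch; the package names, paths, and rake task names are illustrative, not Quepid's actual manifest:

```dockerfile
FROM centos:7

# Runtime dependencies for the Rails app, installed via yum.
RUN yum install -y ruby ruby-devel nodejs mariadb-devel && yum clean all

# Copy the application code and assets into the image.
COPY . /opt/quepid
WORKDIR /opt/quepid

# "Install" rake tasks: bower dependencies, gems, precompiled assets.
RUN bundle install && rake bower:install assets:precompile
```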

With all of these steps listed out in the Dockerfile, we run docker build -t quepid . and the build context is pushed out to our VM, where it runs through each layer. At this point we iterated quite a bit, testing different layering strategies: do we install a bunch of packages together, or each one individually? It all depends on your environment and team. Eventually a container is produced. Think of the container as an environment where commands may be executed. The command could start the application server, or possibly execute a rake task. We could even tell it to run /bin/bash and explore the container in our terminal. In our Dockerfile we hint at how the container should be run as an application with the CMD entry, so when we call docker run it knows what to execute within the container. There are many more commands supported within a Dockerfile; be sure to explore some of the other features available.
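The build-and-run loop described above looks roughly like this; the published port and rake task are illustrative:

```shell
# Build an image named "quepid" from the Dockerfile in the current directory.
docker build -t quepid .

# Run whatever the Dockerfile's CMD specifies (here, the application server),
# publishing the app's port to the host.
docker run -d -p 3000:3000 quepid

# Or override the command: run a one-off rake task...
docker run quepid rake db:migrate

# ...or poke around inside the container interactively.
docker run -it quepid /bin/bash
```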

There is beauty in the simple Docker commands and the Dockerfile. We iterated on this to the point where our Continuous Integration system builds our containers and ships them to a container registry. The CI system may also start the container and verify that it spins up and serves requests, and at that point it may run integration tests against the container, further ensuring that the artifact to be deployed meets requirements.
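A CI smoke test along these lines might look like the following sketch; the registry hostname, port, and tag scheme are made up for illustration:

```shell
# Build and tag the candidate artifact.
docker build -t registry.example.com/quepid:"$BUILD_NUMBER" .

# Start it, then verify it spins up and serves requests.
docker run -d --name quepid-ci -p 3000:3000 registry.example.com/quepid:"$BUILD_NUMBER"
sleep 10
curl --fail http://localhost:3000/ || exit 1

# Clean up, then ship the verified artifact to the registry.
docker rm -f quepid-ci
docker push registry.example.com/quepid:"$BUILD_NUMBER"
```

Only artifacts that pass the smoke test (and any integration tests run against the live container) ever reach the registry, so anything available to deploy has already proven it boots.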

Now our Quepid Rails application is contained. All dependencies required to run are packaged together in a way that makes the app easy to run and ship around. We have guarantees around component versions and isolation. Join us in the next installment, Shipping Containers, where we explore running containers outside of a developer’s machine and in a real environment. We’ll discuss scheduling and tooling around containers and their execution. If you’re interested in exploring containers in your infrastructure, get in touch! We’ll be happy to explore your use case with our team of operations experts.