Docker is a Linux technology that is changing how many developers run applications in a safe, isolated way, and it has now been adopted by Red Hat. Hardware-emulating virtualisation has ruled the cloud, but Docker-managed lightweight containers are now coming to the fore.
Although Docker's container management has been limited to Ubuntu until now, this is changing. Docker has announced that it will be working with Red Hat to package Docker 0.7, the next version, for Fedora, and together they are setting out to eventually support Docker on Red Hat Enterprise Linux. Docker's creators, dotCloud, are also working with Red Hat to integrate Docker with Red Hat's PaaS platform, OpenShift, to get libvirt support for Docker, and more. But what does Docker bring to the Platform-as-a-Service cloud?
At its heart, Docker simplifies managing Linux containers – lightweight isolated Linux systems that run on the host without hardware virtualisation, contained and managed by the kernel. Linux has a number of ways to create containers, but in their raw form they are rarely as simple to get working as they could be. Containers offer a more resource-efficient way of deploying and running multiple applications, because they share the Linux kernel between safely sandboxed virtual systems rather than using a virtual machine that emulates hardware and runs its own full Linux kernel and user-space code. Taking LXC as an example: to use containers without application assistance, a user has to create the container, create a root filesystem, wire up virtual networking and then load their application into the container. It can be a lot of work, and it is not easy to reproduce reliably at scale.
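For comparison, a raw LXC workflow might look something like the following sketch, assuming the LXC userland tools and an Ubuntu template are installed; the container name is made up, and networking setup is omitted entirely:

```shell
# Create a container from a template -- this builds a root filesystem
sudo lxc-create -t ubuntu -n demo

# Start the container in the background
sudo lxc-start -n demo -d

# Run a command inside the running container
sudo lxc-attach -n demo -- ps aux

# Stop the container and delete it again
sudo lxc-stop -n demo
sudo lxc-destroy -n demo
```

Every one of those steps is something Docker takes care of behind a single command.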
AUFS Filesystem at Its Core
This is where Docker steps in to take on the challenge of loading containers so that they can be as convenient to use as their namesake in the cargo business. It's that same convenience that has put Docker at the core of a number of new open source projects for orchestrating clouds of containerised systems and eschewing the weightiness of the more traditional virtualised cloud platforms.
So what makes Docker convenient? At the core is its use of AUFS, Another Union File System, which allows an underlying read-only filesystem to be overlaid with a read-write filesystem that records only the changes made to the underlying filesystem. This is all presented to the user and operating system as a single filesystem. In Docker, these filesystems are referred to as images, and a selection of images can be stacked up to provide a container's contents.
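The layering idea can be seen with a plain aufs mount outside of Docker. This is a sketch only – it requires root and a kernel with aufs support, and the directory names are made up:

```shell
# A read-only lower branch and a read-write upper branch
mkdir /tmp/ro /tmp/rw /tmp/union
echo "original" > /tmp/ro/file.txt

# Union the two: writes go to the rw branch, the ro branch stays untouched
sudo mount -t aufs -o br=/tmp/rw=rw:/tmp/ro=ro none /tmp/union

# Changing the file through the union copies it up to the rw branch
echo "changed" > /tmp/union/file.txt
cat /tmp/ro/file.txt   # the read-only original is unmodified
cat /tmp/rw/file.txt   # the rw branch holds the changed copy
```

Docker stacks its images in exactly this read-only-below, read-write-on-top fashion.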
A user who wants to create a container based on Ubuntu would run
docker pull ubuntu
and this would download a set of images for Ubuntu. Once pulled, the images are held locally. These images are, by default, obtained from the other major element of Docker, the central repository at index.docker.io. This is provided as a public service by Docker's creators, dotCloud. Docker users can search it for images that fit their needs; CentOS and BusyBox, for example, are among the popular images on the repository.
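Searching and pulling might look like this (a sketch with output omitted; the image names simply mirror those mentioned above):

```shell
# Search the central index at index.docker.io for images
docker search busybox

# Download the set of images that make up Ubuntu
docker pull ubuntu

# List the images now held locally
docker images
```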
When Docker is asked to run a command, the user specifies the image they want the command to run in. The command
docker run ubuntu ls
would create a container, attach the ubuntu images as read-only and place a read-write image over them. If the command makes changes, though, for example installing the Apache server with a command like
docker run ubuntu apt-get -y install apache2
those changes happen in the read-write image. A user can also create a terminal session with the command
docker run -t -i ubuntu /bin/bash
and then run any commands within the container to create an environment ready to run an application.
If the command runs to completion and exits, the container stops; if it launched a server or another long-running process, the container keeps running. The command "docker ps -a" will list the IDs of all containers, running and stopped, and any of these containers can be committed, with the
docker commit
command, to the local image store to preserve it and allow it to be recalled easily. Images can also be pushed back up to the index.docker.io repository for others to use; index.docker.io only holds public images, but it is possible to run a private repository server for an organisation.
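Committing and pushing might look like this (a sketch; the container ID and the repository name are made up):

```shell
# Find the container that made the changes, running or stopped
docker ps -a

# Commit that container as a named image in the local store
docker commit 1ab2c3d4e5f6 myuser/apache

# Push the image up to the central repository (requires an account)
docker push myuser/apache
```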
Scripts for Running an Image
These are all useful features, but there is still a lot of manual intervention in creating and running an image in a new container. That's where one other feature of Docker comes to the fore: Dockerfiles. These are "scripts" which allow the entire process of creating an image to be encapsulated in a single file, including selecting a base image, setting metadata and environment variables, executing commands, configuring networking and file access, exposing ports for servers, and setting which command should be run to start the application in the container. A Dockerfile is run with the
docker build
command, which results in a locally saved image; a
docker run
command then gets it up and running.
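Pulling the earlier steps together, a minimal Dockerfile for an Apache container might look like this – a simplified sketch rather than a production recipe:

```dockerfile
# Select the base image
FROM ubuntu

# Execute a command against the image: install the Apache server
RUN apt-get -y install apache2

# Expose the server's port for the outside world
EXPOSE 80

# The command run when the container starts
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```

Saved as "Dockerfile", it could be built with docker build and started with docker run.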
This text-file-driven simplicity, combined with a REST API behind the scenes, has made Docker ideal for running applications in systems orchestrated over multiple machines. For example, Dokku is a "mini-Heroku" PaaS (Platform-as-a-Service) which uses Docker to run applications written in various languages within their own containers; it has, in turn, inspired projects such as Flynn, a more extensive PaaS. The Erlang cloud Voxoz also uses Docker as part of its deployment formula. Docker also provides a quick way to deploy applications: the open source CI platform Strider, for instance, is distributed as either a Dockerfile or a ready-made image.
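That REST API can be exercised directly. With a reasonably recent curl and a local Docker daemon, listing containers is a single HTTP request – a sketch assuming the daemon is listening on its usual Unix socket:

```shell
# Ask the daemon for all containers, running and stopped,
# over its Unix socket -- the same call "docker ps -a" makes
curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=1"
```

It is this API, rather than the command-line client, that orchestration projects such as Dokku build upon.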
For existing PaaS platforms such as Red Hat's OpenShift, Docker will offer a new level of virtualisation within the cloud, with application-level portability rather than just emulated-virtual-machine portability. In the process, Docker will also lose its dependency on AUFS and gain support for Red Hat's and Linux's thin-provisioning technologies. That change will make Docker available to even more Linux distributions, and able to integrate with – and change – even more clouds.