How to Use Docker Machine to Create a Swarm Cluster
Along with Docker Compose, Docker Machine is one of the tools that helps developers get started with Docker. Specifically, Machine allows Windows and OS X users to create a remote Docker host within a cloud provider's infrastructure (e.g., Amazon AWS, Google Compute Engine, Azure, DigitalOcean). With the Docker client installed on your local machine, you can talk to the remote Docker API as if you had a local Docker Engine running. Machine is a single binary that you install on your local host and then use to create a remote Docker host, or even a local one using VirtualBox. The source code is hosted on GitHub.
To begin, you need to install Docker Machine. The official documentation is very good; these first steps summarize the commands it highlights.
Install Docker Machine
Similar to Docker Compose and other Docker tools, you can grab the Machine binary from the GitHub releases. You could also compile it from source yourself or install the Docker Toolbox, which packages all Docker tools into a single UI-driven install.
For example on OS X, you can grab the binary from GitHub, store it in `/usr/local/bin/docker-machine`, make it executable, and test that it all worked by checking the Docker Machine version. Do this:
```
$ sudo curl -L https://github.com/docker/machine/releases/download/v0.6.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine
$ sudo chmod +x /usr/local/bin/docker-machine
$ docker-machine version
docker-machine version 0.6.0, build e27fb87
```
Because Machine will create an instance in the cloud, install the Docker Engine in it, and properly set up the TLS authentication between the client and the Engine, you will need to make sure you have a local Docker client as well. If you do not have it yet, on OS X, you can get it via Homebrew.
$ brew install docker
You are now ready to create your first Docker machine.
Using Docker Machine
As I mentioned previously, you can use Machine to start an instance in a public cloud provider. But, you can also use it to start a VirtualBox virtual machine, install the Docker Engine in it, and get your local client to talk to it as if everything were running locally. Let’s try the VirtualBox driver first before diving into using a public cloud.
The `docker-machine` binary has a `create` command to which you pass a driver name and then specify the name of the machine you want to create. If you have not started any machines yet, use the `default` name. The next command-line snippet shows you how:
```
$ docker-machine create -d virtualbox default
Running pre-create checks...
<snip>
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this
virtual machine, run: docker-machine env default
```
Once the machine has been created, you should see a VM running in your VirtualBox installation with the name `default`. To configure your local Docker client to use this machine, use the `env` command like this:
```
$ docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.102:2376"
export DOCKER_CERT_PATH="/Users/sebastiengoasguen/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval $(docker-machine env default)
$ eval $(docker-machine env default)
$ docker ps
```
Of course, your IP and the path to the certificates will differ. The `env` command sets environment variables that your Docker client uses to communicate with the remote Docker API running in the machine. Once these are set, you have access to the remote Docker Engine from your local host.
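There is no magic here: the client simply reads these variables. As a sketch (the IP and path below are illustrative placeholders, not values from your setup), you can set or clear them by hand instead of going through `docker-machine env`:

```shell
# Illustration only: these are the variables `docker-machine env` exports.
# The IP and certificate path are placeholders; use the values printed for you.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.102:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/default"

# Any `docker` command would now target that remote engine.
# Unset the variables to point the client back at a local daemon:
unset DOCKER_TLS_VERIFY DOCKER_HOST DOCKER_CERT_PATH
echo "DOCKER_HOST is now: ${DOCKER_HOST:-unset}"
```

This is also a handy way to "detach" a shell from a machine without opening a new terminal.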
At this stage, you can start containerizing on OS X or Windows.
Machine goes further. Indeed, the same commands can be used to start a Docker Machine on your favorite cloud provider, and even to start a cluster of Docker hosts. Each driver is well documented in its own reference. Once you have selected a provider, make sure to check its reference to learn how to set up a few key variables -- like access and secret keys or access tokens. These can be set as environment variables or passed to the `docker-machine` commands.
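For example, with the DigitalOcean driver the access token can be supplied either way. A sketch (the token below is a placeholder, and `mymachine` is just an example name):

```shell
# Option 1: environment variable read by the digitalocean driver
export DIGITALOCEAN_ACCESS_TOKEN="xxxx"   # placeholder, not a real token

# Option 2: pass the token explicitly on the command line instead
# (commented out here since it would contact DigitalOcean):
# docker-machine create -d digitalocean \
#     --digitalocean-access-token "xxxx" mymachine

echo "token configured: ${DIGITALOCEAN_ACCESS_TOKEN:+yes}"
```

Environment variables keep secrets out of your shell history, which is usually the better choice.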
Creating a Swarm Cluster with Docker Machine
The important point is that, so far, we have only used a single Docker host. If we really want to run a distributed application and scale it, we need access to a cluster of Docker hosts so that containers can be started on multiple hosts. In Docker speak, a cluster of Docker hosts is called a Swarm. Thankfully, Docker Machine lets you create a Swarm. Note that you could also create a Swarm with well-known configuration management tools (e.g., Ansible, Chef, Puppet) or other tools, such as Terraform.
In this section, I will dive straight into a more advanced setup in order to take advantage of a network overlay in our Swarm. Creating an overlay will allow our containers to talk to each other on the private subnet they get started in.
To be able to use a network overlay, we start the Swarm using a separate key-value store. Several key-value store back ends are supported, but here we are going to use Consul. Typically, we create a Docker Machine and run Consul as a container on that machine, exposing its ports to the host. Additional nodes are then started with Docker Machine (i.e., one master and a couple of workers). Each of these can reach the key-value store, which helps in bootstrapping the cluster and managing the network overlays. For a simpler setup, you can refer to this guide, which does not use overlays.
Let’s get started and create a Machine on DigitalOcean. You will need to have an access token set up, and don’t forget to check the cloud provider references. The only difference from VirtualBox is the name of the driver. Once the machine is running and the environment is set, you can create a Consul container on that host. Here are the main steps:
```
$ docker-machine create -d digitalocean kvstore
$ eval $(docker-machine env kvstore)
$ docker run --name consul --restart=always -p 8400:8400 -p 8500:8500 \
    -p 53:53/udp -d progrium/consul -server -bootstrap-expect 1 -ui-dir /ui
```
Now that our key-value store is running, we are ready to create a Swarm master node. Again, we can use Docker Machine for this, using the `--swarm` and `--swarm-master` options. This advanced setup also makes use of the key-value store via the `--engine-opt` options, which configure the Docker Engine to use the key-value store we created.
```
$ docker-machine create -d digitalocean --swarm --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    swarm-master
```
Once the Swarm master is running, you can add as many worker nodes as you want. For example, here is one. Note that the `--swarm-master` option is removed.
```
$ docker-machine create -d digitalocean --swarm \
    --swarm-discovery="consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    swarm-node-1
```
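Since every worker is created with the exact same flags, adding several is easily scripted. Here is a dry-run sketch: the leading `echo` prints each `docker-machine create` command instead of executing it, and the IP is a stand-in for what `docker-machine ip kvstore` would return.

```shell
# Dry run: print the create command for two more workers.
# Drop the leading `echo` to actually create them (requires docker-machine
# and a running kvstore machine; the IP below is a stand-in).
KV="consul://203.0.113.10:8500"
for i in 2 3; do
  cmd="docker-machine create -d digitalocean --swarm \
    --swarm-discovery=$KV \
    --engine-opt=cluster-store=$KV \
    --engine-opt=cluster-advertise=eth0:2376 swarm-node-$i"
  echo "$cmd"
done
```

Because creation is just a command, the same loop works unchanged with any driver.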
And that’s it, you now have a cluster of Docker hosts running on DigitalOcean. Check the output of `docker-machine ls`. You should see your default machine running in VirtualBox and several other machines, including your key-value store, your Swarm master, and the nodes you created.
```
$ docker-machine ls
NAME           ACTIVE      DRIVER         URL   SWARM                   DOCKER
default        -           virtualbox     ...                           v1.10.3
kvstore        -           digitalocean   ...                           v1.10.3
swarm-master   * (swarm)   digitalocean   ...   swarm-master (master)   v1.10.3
swarm-node-1   -           digitalocean   ...   swarm-master            v1.10.3
```
What is very useful with Machine is that you can easily switch between the machines you started. This helps with testing locally and deploying in the cloud.
Using your Cluster or your Local Install
The active Docker Machine -- the one marked with a star in the `docker-machine ls` output -- is shell dependent. This means that if you open two terminals, set your default VirtualBox-based machine as the active one in the first, and point the second at your Swarm master, you can switch Docker endpoints simply by switching terminals. That way, you can test the same Docker Compose file locally and on a Swarm.
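Because the active endpoint is nothing more than shell environment, two subshells can stand in for two terminals. A sketch (the addresses are illustrative stand-ins for what `docker-machine env` would export in each terminal):

```shell
# Each command substitution runs in its own subshell, standing in for a
# separate terminal; the exported DOCKER_HOST is scoped to that subshell.
# Addresses are illustrative, not real machines.
s1=$(export DOCKER_HOST="tcp://192.168.99.102:2376"; echo "shell 1 -> $DOCKER_HOST")
s2=$(export DOCKER_HOST="tcp://203.0.113.20:3376";  echo "shell 2 -> $DOCKER_HOST")
echo "$s1"
echo "$s2"
echo "parent shell DOCKER_HOST: ${DOCKER_HOST:-unset}"
```

The parent shell is untouched, which is exactly why two open terminals can target two different endpoints at once.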
To point to your Swarm, the syntax is slightly different than for a single host: you need to pass the `--swarm` option to the `docker-machine env` command, which makes the exported `DOCKER_HOST` target the Swarm manager port (3376) instead of the engine port (2376). Like so:
```
$ docker-machine env --swarm swarm-master
$ eval $(docker-machine env --swarm swarm-master)
```
Check that your cluster has been properly set up with `docker info`. You should see your master node and all the workers that you have started. For example:
```
$ docker info
<snip..>
Nodes: 2
 swarm-master: 22.214.171.124:2376
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 513.4 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 15.10, provider=digitalocean, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-03-20T17:59:15Z
 swarm-node-1: 126.96.36.199:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 513.4 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu <snip…>
```
In this example, I used a Consul-based key-value store as the discovery mechanism; you might want to use a different one. Now that we have a Swarm at hand, we are ready to start containers on it.
To make sure that our containers can talk to each other regardless of the node they start on, in the next article, I will show how to use an overlay network. Overlay networks in Docker are based on libnetwork and are now part of the Docker Engine. Check the libnetwork project if you want to learn more.