Using Docker Swarm to Create an Overlay Network

In a previous article, we discussed Docker Machine, a tool to create Docker hosts in the cloud. Docker Machine can be extremely handy for local testing if you are on Windows or OS X, but it also adds another dimension when you use it to start Docker hosts with your favorite cloud provider and/or create a cluster.

We then used Machine to go straight into an advanced subject and create a Docker Swarm, which is a cluster of Docker hosts. A cluster of Docker hosts is needed to run a truly distributed application in production. In this article, we will look at setting up networking for a Swarm so that containers can communicate with each other across hosts. A Swarm cluster still lets us use Docker’s native single-host networking, but it also allows us to create an overlay network backed by VXLAN. Containers started on this overlay can communicate with each other out of the box. This article will show you how to create, use, and test an overlay network using Docker Swarm.

Creating an Overlay Network

After you’ve set up your Swarm, you could start using it right away and start containers the way you are accustomed to. Docker will automatically use what is called a bridge network. Although this works and lets you expose services on your hosts, it complicates networking between containers started on multiple hosts: you would have to publish ports on the hosts and tell each container where to find the others.
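
To see the friction concretely, here is a minimal sketch of wiring two containers across hosts with bridge networking alone. The host names, the `redis` service, and the `my-app` image are hypothetical, purely for illustration:


# On host-1: publish the service port on the host itself
$ docker run -d --name=db -p 6379:6379 redis

# On host-2: the client cannot resolve "db", so we look up host-1's IP
# with docker-machine and pass it in by hand (hypothetical image)
$ docker run -d --name=app -e DB_HOST=$(docker-machine ip host-1) my-app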

Libnetwork provides an overlay network that your containers can use so that they appear on the same subnet. The huge bonus is that they can reach each other and resolve each other’s DNS names, making service discovery a breeze.

With your Swarm master (or any node of the Swarm) as your active Docker Machine host, for example via `eval $(docker-machine env --swarm swarm-master)`, you can create an overlay network with the `docker network create` command, like so:


$ docker network create foobar
165e9c2bafab44513da2f26426216217dc69ca2cd021f966ccc64e7c6bf898d9
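
Pointed at a Swarm, `docker network create` defaults to the overlay driver, which is why we did not have to specify it above. If you prefer to be explicit, or want to pick the subnet yourself instead of letting Docker allocate one, you can pass the `--driver` and `--subnet` flags; the subnet here is just an example:


$ docker network create --driver overlay --subnet=10.0.9.0/24 foobar

Note that the overlay driver relies on the key-value store (such as Consul, etcd, or ZooKeeper) that the Swarm hosts were configured with through the engine’s `--cluster-store` and `--cluster-advertise` options.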

You can list the networks available to you. You will see multiple networks: each host has a `bridge`, a `host`, and a `none` network. These three network types are also available on a single Docker Engine setup, with the bridge network being the default. Our `foobar` overlay created above also appears in this list, and it is global to all the hosts in our Swarm.


$ docker network ls
NETWORK ID          NAME                  DRIVER
2c48d476867e        swarm-master/bridge   bridge              
0b6ae86378f3        swarm-master/none     null                
967c471c311c        swarm-master/host     host                
01f3d280bc68        swarm-node-1/bridge   bridge              
d0f929b000bc        swarm-node-1/none     null                
71550dff8c32        swarm-node-1/host     host                
165e9c2bafab        foobar                overlay
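
You can also inspect the network to confirm its driver and, once containers are attached, to see their names and IP addresses; the output is a block of JSON, omitted here:


$ docker network inspect foobar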

Using the Overlay Network

To use our overlay, we can start containers in the Swarm, giving them a name and specifying `foobar` as their network, like so:


$ docker run -d --name=foo --net=foobar nginx
$ docker run -d --name=bar --net=foobar nginx

When listing our containers, we will see which hosts they were started on. You might have to dive a bit into Swarm scheduling strategies to understand how Swarm picks a host for a container, and it could be that Swarm schedules both of your test containers on the same host (if so, see the constraint example after the listing below). In the test below, we had two worker nodes, and Swarm scheduled our containers on both of them, spreading the containers across the cluster.


$ docker ps
CONTAINER ID    IMAGE   COMMAND                  CREATED         STATUS          PORTS             NAMES
21587d81505d    nginx   "nginx -g 'daemon off"   2 seconds ago   Up 2 seconds    80/tcp, 443/tcp   swarm-node-1/bar
6d66dc56af4f    nginx   "nginx -g 'daemon off"   9 seconds ago   Up 8 seconds    80/tcp, 443/tcp   swarm-node-2/foo
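
If Swarm happened to place both containers on the same host and you want to force the cross-host scenario, you can steer the scheduler with a constraint. Here is a sketch using classic Swarm’s constraint filter, assuming a node named `swarm-node-1`:


# remove the earlier container first, since the name is taken
$ docker rm -f bar
$ docker run -d --name=bar --net=foobar -e constraint:node==swarm-node-1 nginx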

Testing the Overlay Network

This setup lets us test the overlay network. If all went well, the two containers should be on the same overlay network even though they run on separate hosts. This means we should be able to `ping` each container by its name, which was registered in Docker’s embedded DNS. Let’s try it, using the `docker exec` command:


$ docker exec -ti swarm-node-1/bar ping -c 1 foo
PING foo (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: icmp_seq=0 ttl=64 time=1.433 ms
--- foo ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.433/1.433/1.433/0.000 ms

Indeed, we can ping the container named `foo` from the container named `bar`, and we can also do the opposite:


$ docker exec -ti swarm-node-2/foo ping -c 1 bar
PING bar (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=0.984 ms
--- bar ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.984/0.984/0.984/0.000 ms
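
To check name resolution itself, independently of ICMP reachability, you can query the resolver from inside a container; `getent` ships in the Debian-based `nginx` image, though that is an assumption about the image contents:


$ docker exec -ti swarm-node-2/foo getent hosts bar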

And just like that, you have multi-host networking in Docker: containers started on different hosts by the Swarm scheduler can reach each other on their private IPs, thanks to an overlay network.

In a future post, we will get back to Docker Compose and see how we can take advantage of a Swarm and its overlay networks to create a truly distributed application, where containers can be started on different networks for isolation and where each service can be scaled independently.