How to Integrate Containers in OpenStack
One of the key features of the OpenStack platform is the ability to run applications, and quickly scale them, using containers.
Containers are ready-to-run applications: they ship packaged with the entire stack of dependencies required to run them.
OpenStack is an ideal platform for containers because it provides all of the resources and services for containers to run in a distributed, massively scalable cloud infrastructure. You can easily run containers on top of Nova because it includes everything that is needed to run instances in a cloud. A further development is offered by project Zun.
In more complex environments, container orchestration is often required. Using container orchestration makes managing many containers in data center environments easier. Kubernetes has become the preferred solution for container orchestration. Container orchestration in OpenStack is implemented using project Magnum.
Current OpenStack offers no fewer than three solutions for running containers:
Directly on top of Nova
Using container orchestration in project Magnum
Using project Zun
In this tutorial, I’ll show you how to run containers in OpenStack using the Nova driver with Docker.
What is Docker?
Multiple solutions are available for running containers on cloud infrastructure. Currently, Docker is the most widely used. It offers everything needed to run containers in a corporate environment and is backed by Docker Inc. for commercial support.
Docker has many advantages. Its containers are portable as images and can be assembled from an application's source code. File-system-level changes can also be managed easily, and Docker can collect the STDIN and STDOUT of processes running in a container, which allows for interactive management of containers.
The Nova driver embeds an HTTP client which talks with the Docker internal REST API through a UNIX socket. The HTTP API is used to control containers and fetch information about them.
The driver fetches images from OpenStack’s Glance service and loads them into the Docker file system. From Docker, container images may be placed in Glance to make them available to OpenStack.
Enabling Docker in OpenStack
Now that you have a general sense of how containers work in OpenStack, let’s talk about how you can enable containers using the Nova driver for Docker. The OpenStack Wiki has a detailed explanation of how to configure any OpenStack installation to enable Docker. You can also use your distribution’s deployment mechanism to deploy Docker.
When you do this, the Docker driver is added to the nova.conf file, and the Docker container format is added to the Glance configuration.
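As a rough sketch of what those changes look like (option names follow the nova-docker instructions on the OpenStack Wiki; verify them against your release and deployment tooling):

```ini
# nova.conf -- tell Nova to use the Docker driver
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

# glance-api.conf -- add "docker" to the accepted container formats
[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,docker
```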
Once it’s enabled, Docker images can be added to the Glance repository. The commands below show how to first pull a Docker image, then export it with docker save, and finally create a Glance image from it using the docker container format.
$ docker pull samalba/hipache
$ docker save samalba/hipache | glance image-create --is-public=True --container-format=docker --disk-format=raw --name samalba/hipache
Booting from a Docker image
Finally, once Docker is enabled in Nova, you can boot an OpenStack instance from a Docker image. Just add the image to the Glance repository, and you’ll be able to boot from it. This works like booting any other instance in a Nova environment.
$ nova boot --image "samalba/hipache" --flavor m1.tiny test
After booting, you’ll see the Docker instance in the OpenStack environment using either nova list or docker ps.
In this short tutorial series on OpenStack, we’ve covered how to install a distribution, get an instance up and running, and enable containers in just a few hours.
Interested in learning more OpenStack fundamentals? Check out the self-paced, online Essentials of OpenStack Administration course from The Linux Foundation Training. The course is excellent preparation for the Certified OpenStack Administrator exam.