We’re learning about Kubernetes in this series, and why it is a good choice for managing your containerized applications. In part 1, we talked about what Kubernetes does, and its architecture. Now we’ll compare Kubernetes to competing container managers.
One Key Piece of the Puzzle
As we discussed in part 1, managing containers at scale for a distributed application requires a large, complex infrastructure. You need a continuous integration pipeline and a cluster of physical or virtual servers. You need automated systems for testing and verifying container images, launching and managing containers, performing rolling updates and rollbacks, handling network self-discovery, and managing persistent services in an ephemeral environment.
Kubernetes is just one piece of this puzzle, but it is a very important piece that handles several key tasks (Figure 1). It tracks the state of the cluster, creates and manages networking rules, controls which nodes your containers run on, and monitors the containers. It is an API server, a scheduler, and a controller. That is why it is billed as "Production-Grade Container Orchestration": Kubernetes is like the conductor of a manic orchestra whose players constantly come and go.
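To make those roles concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The names (`web`, `nginx:1.25`) are illustrative assumptions, not something from this series; the point is that you declare a desired state and the API server, scheduler, and controllers converge the cluster toward it.

```yaml
# deployment.yaml -- hypothetical example, not from this series.
# Declares the desired state: three replicas of an nginx container.
# Kubernetes' controllers continuously reconcile the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

You would submit this to the API server with `kubectl apply -f deployment.yaml`; the scheduler then decides which nodes the three Pods run on, and the controllers restart or reschedule them if they fail.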
Kubernetes is a mature and feature-rich solution for managing containerized applications. It is not the only container orchestrator, however; here are a few others you might be familiar with.
Nomad, from HashiCorp, the makers of Vagrant and Consul, schedules tasks that are defined in jobs. It includes a Docker driver for running a container as a task.
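As a sketch of what that looks like (the job, group, and image names here are assumptions, not taken from the article), a Nomad job using the Docker driver might be written as:

```hcl
# web.nomad -- hypothetical example job.
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    task "nginx" {
      # The Docker driver runs this task as a container.
      driver = "docker"

      config {
        image = "nginx:1.25"
      }
    }
  }
}
```

Submitting it with `nomad job run web.nomad` asks Nomad's scheduler to place the task on an eligible client node.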
Rancher is a container orchestrator-agnostic system that provides a single interface for managing applications. It supports Mesos, Swarm, Kubernetes, and its native system, Cattle.
Similarities with Mesos
At a high level, Kubernetes is not that different from other clustering systems: a central manager exposes an API, a scheduler places workloads on a set of nodes, and the state of the cluster is stored in a persistence layer.
For example, if you compare Kubernetes with Mesos, you will see a lot of similarities. In Kubernetes, however, the persistence layer is implemented with etcd, whereas Mesos uses ZooKeeper.
You could also consider systems like OpenStack and CloudStack. Think about what runs on their head node and what runs on their worker nodes. How do they keep state? How do they handle networking? If you are familiar with those systems, Kubernetes will not seem that different. What really sets Kubernetes apart are its fault tolerance, its service self-discovery, its scaling, and the fact that it is purely API-driven.
In our next blog, we'll learn how Google's Borg inspired the modern datacenter, and about Kubernetes' beginnings in Borg.