When they founded CoreOS, Brandon Philips and Alex Polvi set out to essentially redesign the Linux operating system for distributed systems.
They began by looking at the areas where they thought the whole server infrastructure space could be improved, then zeroed in on one of the hurdles of distributed systems: deployments, including application lifecycle management. They also realized that managing the lifecycle of all the files on disk, the traditional job of a package manager, is really hard.
“Package management has kind of failed in a lot of ways to be generic enough for people that aren’t distro maintainers to keep and manage them,” said Philips. “You don’t see a lot of organizations building their internal applications into debs and rpms; if they do, then those are very sophisticated ones and even they are a kind of challenge.”
They considered how to build an operating system for distributed systems that would also improve package management, including properties such as reproducibility, atomic rollback, and updates.
The result of their efforts is CoreOS, an open source, Linux-based operating system designed specifically for clusters, which also provides tools for managing applications inside containers. But the goal to redesign Linux package management for distributed systems has not yet been realized, in part due to a lack of standards around container technology.
Built on Docker
When they started prototyping CoreOS Linux, it had only one fundamental requirement: containers. Containers, of course, are not new in the Linux world, but at the time, they were misunderstood by the market; their importance had yet to be recognized.
Then, when Philips and Polvi were close to finalizing the CoreOS product, they learned about Docker. Because Docker is an open source project, it fit in very well with what the CoreOS team was envisioning for a distributed system. The first release of CoreOS shipped with two components: etcd, a distributed key-value store for cluster coordination, and the Docker container runtime. And, according to the CoreOS website, the main building block of CoreOS is the Docker container engine, where applications and code run.
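etcd itself is written in Go, but the idea it provides to a CoreOS cluster — a consistent key-value store whose keys machines can watch for changes — can be sketched in a few lines. The class and key names below are invented for illustration; this is a single-process toy, not the etcd API.

```python
# Toy illustration of the key-value-with-watch model that etcd provides
# to a cluster. This is NOT the etcd API; all names here are invented.

class ToyKeyValueStore:
    """A single-node stand-in for a distributed key-value store."""

    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> list of callbacks

    def put(self, key, value):
        self._data[key] = value
        # Notify anyone watching this key, the way a CoreOS node might
        # react to a changed config value or a newly elected leader.
        for callback in self._watchers.get(key, []):
            callback(key, value)

    def get(self, key):
        return self._data.get(key)

    def watch(self, key, callback):
        self._watchers.setdefault(key, []).append(callback)


store = ToyKeyValueStore()
seen = []
store.watch("/services/web/leader", lambda k, v: seen.append(v))
store.put("/services/web/leader", "10.0.0.7")
print(store.get("/services/web/leader"))  # -> 10.0.0.7
print(seen)                               # -> ['10.0.0.7']
```

In the real system, the store is replicated across machines via the Raft consensus protocol, so every node sees the same answer to `get` even when machines fail.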
A Split from Docker
Docker and CoreOS were kind of made for each other. CoreOS was the sort of minimalistic, always updated Linux distribution necessary to deploy containers and applications. But, as Docker started to grow bigger, it also started to expand its scope, and some differences cropped up.
“We have been building our product very heavily around Docker. But there were a number of things that we really wanted to influence at the Docker open source project,” said Philips. For example, Philips expressed concern that Docker runs as a daemon process that can potentially affect the availability of other processes. He also said they wanted to address signature verification and standard and open image formats, without implicit DNS names.
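The daemon concern is about process parentage. Under a daemon model, every container is a child of one long-lived background service, so the daemon becomes a single point of failure for supervision; rkt's model is closer to the direct approach sketched below, where the launcher is itself the parent of the application process. This sketch uses plain processes rather than real containers, purely to illustrate the difference.

```python
# Sketch of the process-model difference Philips describes, using plain
# processes instead of containers. Here the launcher is the direct
# parent: the child's exit status flows straight back to the caller,
# with no intermediary daemon whose crash would orphan the workload.

import subprocess

def run_directly(argv):
    """Launch the app as a direct child process and wait for it."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return proc.returncode, out.decode().strip()

code, out = run_directly(["echo", "hello from a direct child"])
print(code, out)  # -> 0 hello from a direct child
```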
For a variety of reasons, however, the desired changes were not happening in Docker, so the CoreOS team decided to build a project called rkt (pronounced rocket) that was more in line with their vision of what a container runtime should do. They also introduced a specification called AppC that defines how to run applications in containers.
The CoreOS website states, “We still believe in the original premise of containers that Docker introduced, so we are doing something about it. While we are at it, we are cleaning up and fixing a few things that we’d like to see in a production ready container.” The features that CoreOS believes are important in the design of a container are: composability, security, image distribution, and open formats.
An Open Source Solution
The CoreOS team is not alone; other Docker users, such as Red Hat and Google, share these goals. So, in June of 2015, more than 40 stakeholders, including CoreOS, came together to form the Open Container Initiative (OCI) at The Linux Foundation, with the stated intention of creating open industry standards around container formats and runtime, and harmonizing with existing specifications, including AppC. Philips said that, so far, OCI has focused on what it means for a process to run inside a Linux container.
“Linux containers are made up of all these discrete technologies and trying to standardize what a container means is a great goal,” Philips said. “But we are very far away from accomplishing everything that we wanted to accomplish with AppC and having an actual image format.”
OCI has just begun to tackle image formats and has not yet discussed other fundamental issues, such as the naming of containers and the signing of images with cryptographic keys. And OCI may never address them because, according to Philips, some OCI members believe these concerns are outside its scope.
So far, the project has focused on developing the open container specifications and the Docker-donated container runtime, runC. But as the technology layers of the stack mature around containers, the project’s scope may expand to other areas where innovation and acceleration are required.
In the meantime, the Cloud Native Computing Foundation (CNCF), which was formed last July with the aim of creating and driving the adoption of a new set of common container technologies, may take on any work that doesn’t fall within the technical scope of the OCI.
“If the OCI board says these things are out of scope or we can’t come to technical resolution, then we will put them to CNCF,” Philips said.
Containers Are the Next Package Manager
Philips and his team are concerned about issues like naming, image formats, and signing in containers because he believes they have a responsibility here. “If we do it right, containers are the next evolution of Linux package management,” he said.
Philips went on to say that package management is the reason Linux has been such a success for the past 15 years. He said the convenience of being able to say, “install that thing that I know has a name,” and to have that thing magically appear on your machine, is amazing.
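That convenience rests on a contract shared by package managers and container registries alike: a human-readable name resolves to a content digest, and the fetched payload can be verified against that digest before it runs. The sketch below illustrates the idea; the index entries, payloads, and function names are invented for this example.

```python
# Minimal sketch of the 'name -> verified content' contract behind both
# Linux package managers and container image distribution. The index
# and payloads here are invented for illustration.

import hashlib

IMAGE_BYTES = b"myapp-image-bytes"
DIGEST = hashlib.sha256(IMAGE_BYTES).hexdigest()

# A repository maps human-readable names to content digests...
index = {"myapp": DIGEST}

# ...and a content-addressed store maps digests to the actual payload.
store = {DIGEST: IMAGE_BYTES}

def install(name):
    """'Install that thing that I know has a name': resolve the name,
    fetch the payload, and verify it matches the advertised digest."""
    digest = index[name]
    blob = store[digest]
    if hashlib.sha256(blob).hexdigest() != digest:
        raise ValueError("integrity check failed for " + name)
    return blob

print(install("myapp"))  # -> b'myapp-image-bytes'
```

The naming and signing questions Philips raises are about exactly this resolution step: who controls the index, and how a user knows the digest really came from the publisher they trust.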