StackEngine’s Boyd Hemphill: How Docker is Changing DevOps

“Docker is Linux containers for mere mortals,” Boyd Hemphill is fond of saying. The Director of Evangelism at container application management startup StackEngine organizes Docker Austin meetups, DevOps Days Austin and Container Days events. He has recently given a number of Docker 101 workshops around the country aimed at introducing DevOps professionals to the business advantages of embracing containers and the disposable development environments that they enable.

“What Docker did is they came up with a beautiful way to package and pipeline the use of containers for people with day jobs that aren’t leveraging billions of dollars of infrastructure, i.e. mere mortals,” Hemphill said. “Docker allows us to think about those things in ways that are useful to us in our DevOps jobs.”

Hemphill will speak about “Minimum Viable Production with Docker Containers” at ContainerCon in Seattle on Aug. 18. Here he tells us more about StackEngine, the use of Docker and Linux containers in DevOps, and how containers are changing DevOps practices.

Linux.com: What is StackEngine? Are you essentially building the Puppet or Chef equivalent for higher level container automation?

Boyd Hemphill: StackEngine provides a software solution for managing your Docker infrastructure. Today the phrase we in DevOps champion is, “Cattle, not Pets.” Tomorrow it will be, “Ants, not Cattle.” Yesterday you had 100 physical host pets. Today you have 1,000 virtualized or cloud server cattle, and when one gets sick you cull it from the herd. Tomorrow you will have 10,000 or more Docker container ants; you won’t know where they are, and you will lose track of them. With StackEngine you describe the desired state of your infrastructure and let the Container Application Center deploy, manage and report on that state.

At their core, Puppet and Chef are concerned with the state of a host’s configuration: they prep a host to receive a software application. Docker removes the need for much of this configuration. Indeed, companies such as CoreOS and Rancher Labs remove package management from the OS altogether in favor of containers. So new software needs to be written to manage the application containers and their supporting services, rather than the idempotent state of the machines. That problem of application management is what StackEngine was formed to address.

Does StackEngine have an open source component?

Hemphill: Our product is currently closed source. We just look at that as a wise business decision at this point. Right now we think the right choice is to keep our code to ourselves. If the market indicates a need to open source, then we will make that choice. It is, however, a choice you can only make once. There are parts of our code that we do intend to open source. For example, we will be open sourcing our API bindings as soon as we have them in a presentable state.

Tell me something I don’t already know about Docker.

Hemphill: One of the biggest questions we’re getting from big organizations is, “Do I really have to write all my stuff with microservices?” And the answer is no. We’re seeing a lot of successful Docker usage by people who have simply taken their big monolithic application, put it in a Docker container, and activated different code paths within it. They use different environment variables and commands to run the containers, and achieve benefits that wouldn’t have been available in a virtual machine. Tackling the problem of which legacy applications to “Dockerize,” and how to actually do it, is a prevalent question at this time.
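For illustration, here is a minimal shell sketch of that monolith-in-a-container pattern. The image name, the APP_ROLE variable and the worker command are hypothetical, not details from the interview:

```
# A sketch of one monolithic image selecting different code paths at run
# time; image name, variable, and worker command are illustrative only.
docker build -t monolith:latest .

# Run the same image as the web front end...
docker run -d --name web -e APP_ROLE=web -p 8080:8080 monolith:latest

# ...and again as a background worker, selecting a different code path
# with a different environment variable and command.
docker run -d --name worker -e APP_ROLE=worker monolith:latest bin/run-worker
```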

Why use Docker then?

Hemphill: There’s a business advantage. In the virtualization wave of the cloud, only around 30 percent of a physical host’s capacity was typically used, leaving the other 70 percent idle but still drawing power. That waste typically amounts to millions of dollars for any company of size.

With Docker the claim is you can reach 90 percent process density. Realistically, we’re seeing 60-80 percent. Whether we hit 90 or not, we’re essentially doubling the density we can get, which means we need only half the infrastructure, and thus half the power. For any company of size, that is a real cost worth keeping your eyes on.

What are the benefits for developers and DevOps professionals?

Hemphill: When you talk about developers, feature velocity is really key. Features mean money. So if moving the buy button from one side of the screen to the other drives more sales, then I win. If I can measure that through A/B testing, then I have proof I am winning, not hunches. If competitors aren’t doing that, then I’m capturing market share. A/B testing existed long before Docker, but Docker makes it easier to reason about and to perform technologically. A disposable development environment is the first step there – just being able to spin up a development environment by typing “boot2docker up”.
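A minimal sketch of what that looks like in practice, using the boot2docker tooling of the era. Only the “boot2docker up” command comes from the interview; the Ruby image and mount paths are illustrative assumptions:

```
# Bring up the Docker host VM on a developer workstation (the boot2docker
# tool of the time; docker-machine later replaced it).
boot2docker up

# Spin up a throwaway environment from a stock image, with the project
# mounted in. Exiting destroys it (--rm); rebuilding takes seconds, not days.
docker run -it --rm -v "$PWD":/src -w /src ruby:2.2 bash
```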

Right now, most developers are using what I jokingly refer to as hand-built, artisanal, bespoke development environments. When you pull on one thread, such as updating your Java libraries, the whole thing can come unraveled. This teaches developers to be terrified of actually trying to upgrade because, first of all, it’s going to take them a day to do it, and if it blows up in their face they have to back all of that out and then take another day to rebuild their development environment. That impedes developers from taking the risk of getting a new language version in place that could potentially save them time. Think of the other risks your developers could take knowing they could get back to a workable state in moments. We are looking at another real boon to innovation.

With a disposable development environment you can change a single configuration line, like, “I want to use Java 8.” You say go, it comes up and everything is good – you can move forward. Or the tests fail and now you have a value proposition – a business decision about the time to fix the discovered issues versus the perceived rewards of the upgrade. The key difference is the developer spent only minutes discovering there were issues. Developers who can take the risk of innovating will win more often, and their organization is going to overtake its competition.
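A hedged sketch of that “change one line, rerun the tests” workflow, assuming the project’s base image is set in a Dockerfile and Gradle runs the tests (both assumptions, not details from the interview):

```
# Try Java 8 by changing one line and rerunning the tests; tags and
# the test command are hypothetical.
sed -i.bak 's/^FROM java:7/FROM java:8/' Dockerfile  # the single configuration line
docker build -t myapp:java8-trial .                  # minutes, not a day
docker run --rm myapp:java8-trial ./gradlew test     # pass: move on; fail: cheap data
```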

How do you think containers are changing DevOps?

Hemphill: I look at DevOps not as continuous delivery or Chef or Puppet or Docker, but as the way a technology organization embeds itself in a business, for the benefit of that business. Instead of trying to be prescriptive, I’m trying to enable thought in the context of the problem you’re trying to solve.

Containers really enable DevOps thinking because they move the microservices notion down to the infrastructure, where it’s much easier to think about. Previously that happened at the code level in a service-oriented architecture, which only the very best developers ever really understood and were able to execute.

How does Docker affect DevOps?

Hemphill: It enables more of these models to be implemented in easier or more effective ways. Docker doesn’t change DevOps; it enables more people to engage in DevOps thinking, and it makes that thinking more effective.

We tend to equate Docker and containers right now; that’s wrong. But in this specific case, I mean Docker. Container technology has been around since 1998, arguably since 1982. What Docker did is they came up with a beautiful way to package and pipeline the use of containers for people with day jobs that aren’t leveraging billions of dollars of infrastructure, i.e. mere mortals. Docker allows both Dev and Ops to think about a microservices architecture from their respective roles in a much easier way. It provides a common ground for this innovation. Thanks Docker!

So you’re saying DevOps brings a traditional tech role in IT closer to business operations, which seems like a big evolution.

Hemphill: Dave Mangot, a DevOps speaker and systems thinker, happened to be in town in March and spoke at the Austin DevOps meetup. In DevOps a lot of people talk about tightening feedback loops – a way of working that gives you smaller batch sizes. When Dave drew his feedback loop it included the customer, and ultimately, if you think about it, a system that doesn’t include the customer is a broken system.

Do you know why they call it a production environment? Because it produces revenue. That’s why it’s so important… It makes money. How? There are a lot of ways. But ultimately our customers need to tell us by action – not by words – but by buying or registering for the software or service that we’re providing.

DevOps should include everything from the genesis of the idea with some project manager, to the implementation of the idea in development, to testing and running the idea through operations, to getting feedback through a customer support mechanism in the real world.

How does combining containers with DevOps enable innovation?

Hemphill: You have to ask yourself, “How do you know that a Docker container is doing what it’s supposed to do?” This is a big question, especially around security. Then again, how do you know that the Ruby library you just pulled off GitHub is doing what it’s supposed to do? The notion of trust existed before Docker, so saying that Docker is insecure is misplaced.

But imagine that instead of asking, “What’s inside that container – what’s its operating system, what coding language is it using?” I treat whatever Docker image comes out the other end of the build pipeline as a black box, and I test that box as an Ops guy to make sure it complies with everything I need (handles the required requests per second, performs the agreed-upon functions, and so on). By doing that we tighten the feedback loop.
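A rough sketch of such a black-box check, with a hypothetical registry, endpoint, and an ApacheBench smoke test standing in for a real test suite:

```
# Black-box testing an image from the pipeline: no peeking inside, just
# verify the contract from outside. Names, port, and URL are illustrative.
docker run -d --name candidate -p 8080:8080 registry.example.com/app:build-42

# Functional contract: does it answer the agreed-upon endpoint?
curl -fsS http://localhost:8080/health || exit 1

# Performance contract: a rough requests-per-second check with ApacheBench.
ab -n 1000 -c 10 http://localhost:8080/

docker stop candidate && docker rm candidate
```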

Apply that same thinking to the practice of security. Today what we have is a lot of governance: “you may not use that package,” “you may only use these things.” That really kills innovation because developers can’t get to the newest thing other developers have created. Tomorrow we can have compliance instead: “Your container failed these security tests, please fix it.” I hope you can see how that fits into the build pipeline central to a DevOps practice.
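One way such a compliance gate might look in a pipeline script. The root-user check is a single illustrative test, not a description of any specific tool:

```
#!/bin/sh
# A compliance gate in a build pipeline: fail the build with a concrete,
# fixable message rather than forbidding packages up front.
IMAGE=registry.example.com/app:build-42   # hypothetical image name

# Example test: the container must not run its process as root.
if [ "$(docker run --rm "$IMAGE" id -u)" = "0" ]; then
  echo "FAIL: $IMAGE runs as root. Set USER in the Dockerfile and rebuild."
  exit 1
fi
echo "PASS: $IMAGE meets the security checks."
```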

Can you give me an example?

Hemphill: There’s new stuff being added to the Linux kernel all the time, such as cgroups, which started this whole thing. Developers really wanted to take advantage of it, but some were stuck on Ubuntu 10.04 because that’s what policy dictated, and the security people couldn’t move fast enough to say 12.04 or 14.04 was indeed OK. So you have this very rigid, slow-moving business unit (IT) that can’t deliver change to customers at the rate at which those customers want to accept change. Keep in mind that the security people don’t want to be this bottleneck; they simply cannot keep up with the rapid changes in software and the exploits by hackers. It’s a tough spot.

So if we could build measures around containers to determine that they’re safe and do the things they’re supposed to do – in other words, black-box thinking – we could accelerate feature velocity and innovation. Sharing disposable development environments allows people to just do a pull and go, in a very easy way, without a whole lot of human process. And if something really bad happens, it’s a matter of hours, not days, to roll back the change (e.g., a SQL injection hole) or roll forward to a solution (e.g., Heartbleed).
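A sketch of that roll-back, assuming immutable, tagged images in a hypothetical registry:

```
# Immutable, tagged images make recovery a pull-and-restart; the registry,
# container name, and tags are illustrative.
docker pull registry.example.com/app:v41        # last known-good build
docker stop app && docker rm app
docker run -d --name app -p 8080:8080 registry.example.com/app:v41
# Rolling forward to a patched build is the same operation with a newer tag.
```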

Does Docker change the game for DevOps?

Hemphill: We visited a company implementing WebSphere for all these different customers, so they have literally hundreds of different permutations of WebSphere and Java that they need to run and test each time they want to make a change. By stuffing WebSphere into a container and taking advantage of the fact that a container starts in milliseconds, versus a couple of minutes for a virtual machine, they were able to reduce build times from 16 hours to 3 hours. That means developers could push a few times per day and find out, through the build grid, what was going on.

That was a real tightening of the feedback loop. And that was just in their first attempt, so who knows what they’ve ratcheted it down to now! They were also able to cut more than half of the physical machines out of the build process, saving a ton of capital and operational expenditure and a whole bunch of power.

Docker doesn’t change the game to something different, it makes the game easier to play for more people.