Greenfield buildouts are wonderful and we love them. We get to start from scratch and don’t have to worry about compatibility with existing servers and applications, and don’t have to struggle with preserving and migrating data. Greenfields are nice and clean and not messy. Greenfields are fun.
Sadly, back here in the real world, greenfield buildouts are rare, and we must till the brownfields of legacy systems. This is becoming an acute issue in our fun new era of clouds, containers, and microservices, which all look like wonderful technologies, but implementing them is not quite as easy as talking about them. In his presentation from LinuxCon North America, Richard Marshall of IAC Publishing Labs describes Ask.com’s adventures in navigating two decades of legacy infrastructure, and the many speed bumps and roadblocks along the way to living the container-native dream. It’s a realistic guide to what to expect when your turn comes and how to deal with the inevitable difficulties.
Marshall tells us how the beginnings were innocuous enough: “About three years ago in the early end of 2014, the first glimmers of interest of the container concept started to emerge within the Ask development organizations…We spun up a pilot environment, tested things. It went very well, actually. It was stable. It did everything it said it was going to do.”
So far, so good. But the first speed bumps came early: “However, because this was one of those initiatives that was driven more on the interest in the technology and less of an actual business driver, we ended up at a bit of an impasse where the developers wouldn’t buy into the process of working towards putting real applications on this until operations gave them a timeline for when we would be able to go to production. Ops reciprocally wouldn’t do that until dev bought into it…That kind of catch-22 lingered for a while, and eventually we just let the pilot environment rot in place. It’s still there,” says Marshall.
Brave New Container World
Time passed and there it sat. But the buzz amplified, and tech news was all about Docker, Kubernetes, Mesos, orchestration, continuous integration, virtual machines, all the promises of the brave new container world. Marshall’s teams launched some new pilot projects using Docker, Kubernetes, and VMs, which succeeded to the point that most of the dev teams were using them. Marshall says, “The further we got with that pilot, the more it became apparent that to have any reasonable timelines for getting to production, we would need some sort of on-ramp that didn’t actually include all of the complexities at once.”
Despite the success in deploying Kubernetes, Marshall’s team realized they would have to take a step or two back and replace it with the Kubernetes-based OpenShift Origin. “That decision did kind of upend a lot of what we were doing, and required some rethinking of how we were going to make that happen…So far we’ve only run into a few problems with the differences between the Kubernetes exposed by OpenShift and the bare Kubernetes that we were running before. Last week, we launched our first front-end production service on Docker, serving about 10 million requests per day. We will finish deploying the rest of that service, and hopefully that will jump that figure up to about 40 million requests per day,” he says.
Some of the difficulties stemmed from fascination with the technologies rather than a business case for deploying them, which diverted resources from other projects. Others were delays caused by security testing. But Marshall says the biggest speed bump was simpler: “The learning curve was probably the most challenging thing that we had to overcome in the months and year leading up to our first production deployment.”
Watch the full video (below) to learn in detail the challenges Marshall’s team faced and how they overcame them.