Microservices are one of the latest instances of the cosmic methodological swings of the cyberpendulum between “small, specific-task components” (e.g., subroutines, Unix commands) and “massive monoliths.” (And, they’re also a newish member of “pieces that talk to each other over networks,” along with Remote Procedure Calls, SOA, etc.)
One of the organizations working on platform infrastructure to support — create, test, deploy and manage — microservices architectures is the Cloud Foundry Foundation. Started in 2015 as an independent, not-for-profit 501(c)(6) Linux Foundation Collaborative Project, the Foundation currently consists of more than 185 incubating or active projects and is in use in hundreds of production environments, including many in the Global 2000. It's in use at two of the top U.S. telco carriers and two of the world's top three insurance companies, at companies like Allstate, Chase, JP Morgan, Swisscom and Verizon, and at least six Global 500 manufacturing companies, including GE.
For DevOpsers and others looking to learn more about microservices, here’s an (edited) interview I did with Sam Ramji, CEO of the Cloud Foundry Foundation.
Linux.com: What are microservices?
Sam Ramji: Microservices are an architectural style. What we've learned over the last few years is that they are an architectural pattern, a concept you implement through code. Codewise, they are quite small — usually five to thirty lines of code, although many are 100 or so lines and some are around 1,000 lines. Creating a web server in Node.js takes about five lines of code, for example.
Linux.com: What are the alternatives, e.g. “monolithic”?
Sam Ramji: Sometimes monolithic is the right approach to building a system. For example, if you need a lot of performance and will be taking care of it in a customized way — assigning a team of people to it, having it run on specific servers. This can include enterprise Java applications: writing a big J2EE application and putting it into a WAR (Web Application Archive) file can be the right approach for some durable, industry-specific tasks.
Linux.com: What’s needed to create and use microservices?
Sam Ramji: They need an operational environment like Cloud Foundry’s Buildpacks, to package up code including all its dependencies. And they need a framework to run in, be tested, go through staging, and land in production. If you are going to do things the microservices way, you need a platform to operate in — that’s what a Cloud Native Apps Platform (CNAP) like Cloud Foundry is for.
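For illustration, deploying to a platform like Cloud Foundry is typically driven by a small manifest; this is a hypothetical example (the app name and values are invented here), which the platform combines with a buildpack to package and run the code:

```yaml
# manifest.yml — a minimal, hypothetical Cloud Foundry app manifest
applications:
- name: hello-service
  memory: 128M
  instances: 2
  buildpack: nodejs_buildpack
```

The developer never specifies servers or IP addresses; the platform decides where the instances run.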
Linux.com: What’s the business/technology motivation behind microservices, containers, cloud-native apps, etc.?
Sam Ramji: Basically, to let organizations of all sizes meet the technology needs of today’s business — which includes the ability to create and deploy new apps fast, for these apps to automatically scale up to handle millions of users and scale down again, to be able to make changes, and to have this be done by smaller teams. Microservices — small bits of code, and systems that don’t need lots of operations people to support them — have been making this possible. We have companies running hundreds of apps on thousands of servers, with only a handful of operations staff to manage them. One organization [Huawei] has around 4,500 production apps on about 20,000 servers, and they can launch 400 containers per minute if they need to.
Linux.com: How are microservices and containers different from previous “smaller piece” approaches?
Sam Ramji: Microservices and containers break the traditional connection between the code and its environment. Historically, your code had to 'know' things like IP addresses and user access information — details that are custom and specific to the server it's running on. That meant you couldn't 'move' the code, or scale it up or down. You need the code to be independent of the running environment. A microservice has factored out all those details — following the Twelve-Factor App methodology lets you do this. The container makes sure that the microservice can call heavier-weight services, and it can all function. And this approach makes sure that more instances crank up as demand increases, and vice versa.
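As a sketch of what "factoring out the details" means in practice, a twelve-factor service reads its configuration from the environment rather than hardcoding it. The variable names and fallback values below are illustrative assumptions, not part of any real platform contract:

```javascript
// Twelve-factor style: configuration comes from the environment, not the code.
// The platform injects PORT and DATABASE_URL at deploy time; the code itself
// knows nothing about which server or database instance it is bound to.
const port = process.env.PORT || 3000;                       // fallback for local dev
const dbUrl = process.env.DATABASE_URL || 'postgres://localhost/dev';

console.log(`listening on ${port}, talking to ${dbUrl}`);
```

Because no address or credential is baked in, the same code can be moved, restarted, or scaled to many instances without being edited.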
Linux.com: Why do you want to avoid “hardwiring” these details?
Sam Ramji: Because of the scale — the number of instances. A decade ago, a big J2EE deployment might run on ten machines. That's a finite, manageable number, and you could use Chef or Puppet to do updates automatically and work around the side effects of running code that wasn't designed to be moved around.
But when you try to work with lots of servers, like a thousand or more, the routing tables blow up — and data centers now give us ten or twenty thousand machines or more to work with in a single fabric. DNS has become more scalable, but the older apps are inefficient because of the way they were written… which wasn't with this kind of scaling and distribution in mind. "Cloud-native" apps are, as the name suggests, intended and expecting to run in a cloud environment, and a cloud-native app platform (CNAP) can manage all the environmental details as well as hand them to the code on the fly.
Linux.com: Where does Cloud Foundry fit in?
Sam Ramji: Cloud Foundry is a cloud-native app platform. It’s an open source hardened production infrastructure — we’re an Open Platform-as-a-Service, intended for use by and in multi-vendor, multi-cloud, global enterprises.
Linux.com: What does Cloud Foundry include?
Sam Ramji: Our projects include:
- A Buildpacks system, notably the Java Buildpack, as well as buildpacks for Ruby, Go, Node.js, PHP, and Python, which takes microservice code and packages it into containers
- Diego, the Elastic Runtime layer — a container scheduling system, which manages all the flow of containers across thousands of servers.
- Cloud Controller, for managing the lifecycle of applications
- BOSH, a deployment engine, which, as our web site says, “unifies release engineering, deployment, and lifecycle management of small and large-scale cloud software… can provision and deploy software over hundreds of VMs… [and] also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime.” For example, BOSH takes care of deploying Diego to multiple servers as you add servers to a Cloud Foundry, and is also responsible for managing heavy duty services like Cassandra, MySQL, and Mongo.
Linux.com: Who benefits more from microservices — developers, or production?
Sam Ramji: Both. It's about DevOps. And it has to be. If you satisfy developers but don't give operators a platform they can run, you don't get continuous delivery. One important aspect of Cloud Foundry is that we are heavily focused on operations teams.
Linux.com: How can I get started learning about and trying Cloud Foundry?
Sam Ramji: You can get the code from GitHub.com/CloudFoundry. You can also download it to run on clouds like Microsoft Azure, and there are on-demand hosted options, like IBM's Bluemix and Pivotal Web Services.
For more from Sam Ramji, check out his upcoming presentation “The Makings of a Modern Architecture” at LinuxCon.