Microservices, Service Mesh, and CI/CD Pipelines: Making It All Work Together
Brian Redmond, Azure Architect on the Global Black Belt team at Microsoft, showed how to build CI/CD pipelines into Kubernetes-based applications in a talk at KubeCon + CloudNativeCon.
Applications deployed via a CI/CD pipeline get patches and new components added to them all the time, usually using what is called a "blue/green" update process: while a "blue" (stable and tested) version of the application serves users, a "green" version (initially identical to the blue version, but with updates applied) remains idle while it is tested. When the green version is considered tested enough to deploy, it is made available to users, and the blue version becomes the idle one. If the green version fails, the blue version can be redeployed, and the green version is taken offline so its faults can be corrected.
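The blue/green switch described above can be sketched in a few lines. This is a minimal illustration, not Kubernetes tooling: the `Router` class, the version names, and the `smoke_test` hook are all hypothetical.

```python
# Minimal blue/green switch sketch. The Router abstraction and version
# names are hypothetical, for illustration only.

class Router:
    """Routes all user traffic to a single 'live' version."""
    def __init__(self, live):
        self.live = live

    def handle(self, request):
        return f"{self.live} served {request}"


def promote(router, green, smoke_test):
    """Make the green version live if it passes testing; otherwise
    leave blue live and keep green offline for fixes."""
    if smoke_test(green):
        previous = router.live
        router.live = green   # green goes live
        return previous       # blue becomes the idle fallback
    return green              # green stays offline to correct faults


router = Router(live="blue-v1.0")
idle = promote(router, "green-v1.1", smoke_test=lambda version: True)
print(router.handle("GET /"))   # green-v1.1 served GET /
print(idle)                     # blue-v1.0
```

The key property is that the idle version is kept intact, so rolling back is just another switch of `router.live`.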
During a CI/CD-based deployment, it is usual to carry out canary testing instead of exposing an all-blue or all-green instance of the application to users. This means that while most users, say 90 percent, use the blue, stable version of the application, the remaining 10 percent use the version being tested. This lets developers see how the "test" version behaves under real-world conditions. In other words, two different versions of the application are often running at the same time.
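A weighted canary split like the 90/10 one above can be sketched as follows. The function name and weights are illustrative; in practice the split is enforced by the routing layer, not application code.

```python
import random

# Weighted canary routing sketch: send roughly 90% of traffic to the
# stable (blue) version and roughly 10% to the canary under test.
# The 0.10 weight is just the example split from the text.

def pick_version(canary_weight=0.10, rng=random.random):
    return "canary" if rng() < canary_weight else "stable"


# Over many requests, about 10% land on the canary.
random.seed(42)
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[pick_version()] += 1
print(counts)
```

Injecting `rng` makes the routing decision deterministic in tests while staying random in production.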
To further complicate matters, most applications are not single and monolithic, but a series of microservices that must communicate effectively with each other. This means you need:
- Advanced routing — A mechanism for routing traffic to specific versions of specific services using specific routing rules.
- Observability — The ability to gather metrics so you can see what is happening, including what happens when traffic hits the canary release.
- Chaos testing — A testing model that shows what happens when things go wrong.
A pipeline in such a scenario would look like this: an update to the code is taken as a pull request and deployed as a canary build. You would modify the routing to push some traffic over to that release and then score the release. If the release scores above what you have established as an acceptable level, you would automatically push it to production. If it scores below that level, or would require some sort of human interaction to find out what issues it has, you would decommission it completely and reject the update.
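The automated promote-or-reject gate at the end of that pipeline can be sketched like this. The metric names, scoring formula, and threshold are all hypothetical stand-ins for whatever a real pipeline would measure.

```python
# Sketch of an automated canary gate. Metric names, the scoring
# formula, and the 90.0 threshold are hypothetical examples.

def score_release(metrics):
    """Collapse canary metrics into one score (higher is better)."""
    return (metrics["success_rate"] * 100
            - metrics["p95_latency_ms"] / 10)


def gate(metrics, threshold=90.0):
    """Promote if the score clears the acceptable level, else reject."""
    return "promote" if score_release(metrics) >= threshold else "reject"


healthy = {"success_rate": 0.999, "p95_latency_ms": 80}
degraded = {"success_rate": 0.91, "p95_latency_ms": 450}
print(gate(healthy))    # promote
print(gate(degraded))   # reject
```

The point is that the decision is a pure function of observed metrics, which is what makes fully automatic promotion possible.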
This is where Istio comes in. Istio is an open platform to connect, manage, and secure microservices. It helps with service discovery and routing, provides a sidecar proxy (Envoy) that controls where traffic goes, and takes care of health checking and security, among many other features.
When you deploy using Istio, each service has an Envoy proxy as a sidecar. All the traffic from each service going anywhere outside the pod is routed through the proxy. The sidecar also handles telemetry, delays, and so on. At the control plane layer, Istio's components let you manage how the proxies behave.
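The value of the sidecar pattern is that telemetry and fault injection happen outside the service's own code. The sketch below illustrates the idea in plain Python; it is not Envoy, and the class, service, and field names are invented for illustration.

```python
import time

# Sidecar-pattern sketch: every outbound call from a service passes
# through a proxy object that records telemetry and can inject delays
# (a chaos-testing knob), with no change to the service itself.
# All names here are hypothetical; real meshes use Envoy for this.

class SidecarProxy:
    def __init__(self, upstream):
        self.upstream = upstream
        self.telemetry = []        # per-request metrics captured here
        self.injected_delay = 0.0  # seconds of artificial latency

    def call(self, request):
        time.sleep(self.injected_delay)      # chaos testing hook
        start = time.monotonic()
        response = self.upstream(request)
        self.telemetry.append({
            "request": request,
            "latency_s": time.monotonic() - start,
        })
        return response


def reviews_service(request):
    """A stand-in microservice."""
    return f"reviews for {request}"


proxy = SidecarProxy(reviews_service)
print(proxy.call("product-42"))   # reviews for product-42
print(len(proxy.telemetry))       # 1
```

Because the proxy wraps the service rather than modifying it, the same mechanism gives you observability and chaos testing for every service uniformly.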
Redmond also includes Kashti in his toolbox. Developed by the same team that created Brigade, Kashti is a web dashboard for easily viewing and constructing Brigade pipelines.
During the demo, Redmond deployed a web app with several APIs, modified a branched version, and deployed it as a canary test version. He showed how Istio's observability features allowed him to follow every step of the pipeline and, using Prometheus coupled with Grafana, track the performance peaks and valleys of each deployed version.
Learn more about Kubernetes at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.