June 13, 2017

Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)


In this third installment of our series, we get to the main event: running our Kr8sswordz Puzzle application.

In Part 2 of our series, we deployed a Jenkins pod into our Kubernetes cluster, and we used it to set up a CI/CD pipeline that automated building and deploying our containerized Hello-Kenzan application in Kubernetes. That was a great accomplishment, but we’re not stopping there.

In Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase various components of the app, such as caching in etcd and persistence in MongoDB. We will also highlight built-in UI functionality to scale backend service pods up and down using the Kubernetes API, and then simulate a load test.

Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:

  • kr8sswordz - A React container with our Node.js frontend UI.

  • puzzle - The primary backend service that handles submitting and getting answers to the crossword puzzle via persistence in MongoDB and caching in etcd.

  • mongo - A MongoDB container for persisting crossword answers.

  • etcd - An etcd client for caching crossword answers.

  • monitor-scale - A backend service that handles functionality for scaling puzzle service up and down. This service also interacts with the UI by broadcasting websockets messages.

We will go into the main service endpoints and architecture in more detail after running the application. For now, let’s get going!

IMPORTANT: To complete these exercises, you’ll need a computer running an up-to-date version of Linux or macOS. Your computer should have 16 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Exercise 1: Running the Kr8sswordz Puzzle App

First make sure you’ve run through the steps in Part 1 and Part 2, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3. If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start --memory 8000 --cpus 2 --kubernetes-version v1.6.0

If you’d like, you can check the cluster status and view all the pods that are running with the following commands:

kubectl cluster-info

kubectl get pods --all-namespaces

Now let’s start the interactive tutorial for Part 3 with the following terminal commands:

cd ~/kubernetes-ci-cd

npm run part3

Remember, you don’t have to actually type the commands below—just press Enter at each step and the script will enter the command for you.

1. First we will run an etcd.sh script that installs two components used by etcd: an operator that helps manage the etcd cluster, and a cluster service for storing and retrieving key-value pairs. The cluster service runs as three pod instances for redundancy.

Start the etcd operator and service on the cluster.

scripts/etcd.sh


You may notice errors showing up while the script waits for the cluster to come up. This is normal, and the errors will stop once the cluster has started.

If you’d like, you can see these new pods by entering kubectl get pods in a separate terminal window.

2. Now that we have an etcd service, we need an etcd client. The following command will set up a directory within etcd for storing key-value pairs, and then run the etcd client.

kubectl create -f manifests/etcd-job.yml

3. Check the status of the job in step 2 to make sure it deployed.

kubectl describe jobs/etcd-job

4. The crossword application is a multi-tier application whose services depend on each other. We will create three services in Kubernetes ahead of time, so that the deployments are aware of them.

kubectl apply -f manifests/all-services.yml
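Creating the Services ahead of time means their stable cluster DNS names, and the service-discovery environment variables Kubernetes injects at container startup, already exist when the deployment pods come up. As a rough sketch, a single entry in all-services.yml has this general shape (the port numbers here are illustrative, not copied from the repo):

```yaml
# Hypothetical sketch of one Service entry; all-services.yml defines one
# of these for each tier (kr8sswordz, puzzle, monitor-scale, mongo).
apiVersion: v1
kind: Service
metadata:
  name: puzzle
spec:
  selector:
    app: puzzle        # matches the pod labels set by the puzzle deployment
  ports:
    - port: 3000       # illustrative port
      targetPort: 3000
```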

5. Now we're going to walk through an initial build of the monitor-scale service.

docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` \
  -f applications/monitor-scale/Dockerfile applications/monitor-scale

6. Set up a proxy so we can push the monitor-scale Docker image we just built to our cluster's registry.

docker stop socat-registry; docker rm socat-registry;
docker run -d -e "REGIP=`minikube ip`" --name socat-registry \
  -p 30400:5000 chadmoon/socat:latest bash -c \
  "socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:`minikube ip`:30400"

7. Push the monitor-scale image to the registry.

docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`

8. The proxy’s work is done, so go ahead and stop it.

docker stop socat-registry

9. Open the registry UI and verify that the monitor-scale image is in our local registry.

minikube service registry-ui


10. Create the monitor-scale deployment and service.

sed 's#127.0.0.1:30400/monitor-scale:latest#127.0.0.1:30400/monitor-scale:'`git rev-parse --short HEAD`'#' \
  applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -

11. Wait for the monitor-scale deployment to finish.

kubectl rollout status deployment/monitor-scale

12. View pods to see the monitor-scale pod running.

kubectl get pods

13. View services to see the monitor-scale service.

kubectl get services

14. View ingress rules to see the monitor-scale ingress rule.

kubectl get ingress

15. View deployments to see the monitor-scale deployment.

kubectl get deployments

16. We will run a script to bootstrap the puzzle and mongo services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually, for both services.

scripts/puzzle.sh

17. Check to see if the puzzle and mongo services have been deployed.

kubectl rollout status deployment/puzzle

18. Bootstrap the kr8sswordz frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed.

scripts/kr8sswordz-pages.sh

19. Check to see if the frontend has been deployed.

kubectl rollout status deployment/kr8sswordz

20. Check out all the pods that are running.

kubectl get pods

21. Start the web application in your default browser.

minikube service kr8sswordz

Exercise 2: Giving the Kr8sswordz Puzzle a Spin

Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load.   

1. Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in.

2. Click Submit. When you click Submit, your current answers for the puzzle are stored in MongoDB.

3. Try filling out the puzzle a bit more, then click Reload once. This will perform a GET which retrieves the last submitted puzzle answers in MongoDB.

Did you notice the green arrow on the right as you clicked Reload? The arrow indicates that the application is fetching the data from MongoDB. The GET also caches those same answers in etcd with a 30 sec TTL (time to live). If you immediately press Reload again, it will retrieve answers from etcd until the TTL expires, at which point answers are again retrieved from MongoDB and re-cached. Give it a try, and watch the arrows.
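The Reload flow is a classic cache-aside read. As a minimal sketch, with plain in-memory JavaScript standing in for the real etcd and MongoDB clients and hypothetical function names, the logic looks something like this:

```javascript
// Cache-aside sketch: serve from the cache while an entry is fresh,
// otherwise fall through to the database and re-cache with a TTL.
// `fetchFromDb` is a stand-in for a MongoDB query; the Map stands in
// for etcd. Neither is the app's actual client code.
const TTL_MS = 30 * 1000; // 30 second TTL, as in the puzzle service

const cache = new Map(); // key -> { value, expiresAt }

async function getAnswers(key, fetchFromDb, now = Date.now()) {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > now) {
    return { source: 'cache', value: entry.value }; // cache hit, no DB call
  }
  const value = await fetchFromDb(key);             // database round trip
  cache.set(key, { value, expiresAt: now + TTL_MS });
  return { source: 'db', value };
}
```

A first Reload hits the database; a second within 30 seconds is served from the cache; once the TTL expires, the next read goes back to the database and re-caches the answers.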

4. Scale the number of instances of the Kr8sswordz puzzle service up to 16 by dragging the upper slider all the way to the right, then click Scale. Notice the number of puzzle services increase.


In a terminal, run kubectl get pods to see the new replicas.

5. Now run a load test. Drag the lower slider to the right to 250 requests, and click Load Test. Notice how it very quickly hits several of the puzzle services (the ones that flash light green) to manage the numerous requests. Kubernetes is automatically balancing the load across all available pod instances. Thanks, Kubernetes!
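Under the hood, a load test of this kind is just a burst of concurrent requests against the single puzzle service address, with Kubernetes deciding which pod answers each one. A hedged JavaScript sketch, where sendRequest is a hypothetical stand-in for an HTTP GET that resolves to the name of the pod that served it:

```javascript
// Fire `count` concurrent requests and tally which backend instance
// answered each one. The Service's load balancing (kube-proxy)
// determines the actual distribution across pods.
async function loadTest(count, sendRequest) {
  const requests = Array.from({ length: count }, () => sendRequest());
  const hits = {}; // pod name -> number of requests it served
  for (const podName of await Promise.all(requests)) {
    hits[podName] = (hits[podName] || 0) + 1;
  }
  return hits;
}
```

With 250 requests and several replicas behind the service, each replica serves a share of the burst, which is what the flashing green pods visualize.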


6. Drag the middle slider back down to 1 and click Scale. In a terminal, run kubectl get pods to see the puzzle services terminating.


7. Now let’s try deleting the puzzle pod to see Kubernetes automatically restart it, demonstrating its ability to self-heal downed pods.

  a. In a terminal, enter kubectl get pods to see all pods, and copy the name of the puzzle pod.

  b. Enter the following command to delete the remaining puzzle pod.

kubectl delete pod [puzzle podname]

  c. Enter kubectl get pods to see the old pod terminating and the new pod starting. You should see the new puzzle pod appear in the Kr8sswordz Puzzle app.

What’s Happening on the Backend

We’ve seen a bit of Kubernetes magic: pods can be scaled to handle load, requests are automatically load balanced across pod instances, and downed pods are self-healed. Let’s take a closer look at what’s happening on the backend of the Kr8sswordz Puzzle app to make this functionality possible.


  1. When the Submit button is pressed, a PUT request is sent from the kr8sswordz UI to a pod instance of the puzzle service. The puzzle service uses a LoopBack data source to store answers in MongoDB. When the Reload button is pressed, answers are retrieved from MongoDB with a GET request, and the etcd client is used to cache those answers with a 30 second TTL.

  2. The monitor-scale pod handles scaling and load test functionality for the app. When the Scale button is pressed, the monitor-scale pod uses the Kubernetes API to scale the number of puzzle pods up and down.

  3. When the Load Test button is pressed, the monitor-scale pod handles the load test by sending several GET requests to the service pods, based on the count sent from the front end. The puzzle service reports a hit to monitor-scale whenever it receives a request, and monitor-scale then uses websockets to broadcast to the UI so the corresponding pod instances light up green.

  4. When a puzzle pod instance goes up or down, the puzzle pod sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the puzzle pod k8s deployment, which curls the same endpoint on monitor-scale (see kubernetes-ci-cd/applications/crossword/k8s/deployment.yml to view the hooks). Monitor-scale persists the list of available puzzle pods in etcd with set, delete, and get pod requests.
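As a hedged sketch (not the repo's exact manifest), lifecycle hooks of this general shape are what wire up those up/down notifications; the monitor-scale port and endpoint paths below are assumptions for illustration:

```yaml
# Illustrative postStart/preStop hooks on the puzzle container. The
# monitor-scale host, port, and /up /down paths are assumptions here.
containers:
  - name: puzzle
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "curl -s http://monitor-scale:3001/up/$HOSTNAME"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "curl -s http://monitor-scale:3001/down/$HOSTNAME"]
```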

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Up Next

Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for our app. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating a Jenkins pipeline for the Kr8sswordz Puzzle app so that it builds at the touch of a button. We will also modify a bit of code to enhance the application and enable our Submit button to show green hits on the puzzle service instances in the UI. Stay tuned!  

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.
