Deploy and Manage a Multi-Tiered Application Walkthrough

Before You Begin

Before you begin this walkthrough, ensure you are logged onto the PKS desktop by following the instructions found here.

Introduction

In this module you are going to deploy an application called yelb. It provides a simple capability to vote on your favorite restaurant. There is a front-end component called restreview-ui that fulfills a couple of roles. The first role is to host the Angular 2 application (i.e. the UI). When the browser connects to this layer it downloads the JavaScript code that builds the UI itself. Subsequent calls to other application components are proxied via the nginx service that serves the UI.
The rest-review appserver is a Sinatra application that reads and writes to a cache server (redis-server) as well as a Postgres backend database (restreview-db). Redis is used to store the number of page views, whereas Postgres is used to persist the votes. As part of lab setup, container images have been built for you.

This diagram represents the application we are going to manage. The application consists of four separate Kubernetes deployments, each with its own Load Balancer service: a frontend web server and a Redis key-value store, where the Redis store is implemented as a single master with multiple workers. Each deployment defines a ReplicaSet for the underlying pods.

Deploy and Upgrade Restaurant Review Application to Add Persistent Volumes

We will deploy our restaurant review application and submit a few votes to see how it works. Our application is completely ephemeral: if a pod dies, all of its state is lost. That is not what we want for an application that includes a database and a cache. We will upgrade the application to take advantage of persistent volumes and verify that killing the pods does not remove the data.

  

Note: This lab assumes that k8s cluster creation was already performed in Module 2. If you are starting with this module and have not gone through Module 2, please perform the following steps from Module 2 before continuing.

  1. Login to PKS Cluster
  2. Deploy a Kubernetes Cluster
  3. Get Kubernetes Cluster Credentials

 

View the Yaml Files

2-24.png

In Module 3 we went through the details of the deployment, pod and service specs so we won't do that again here.

  1. Type cd C:\PKS\apps
  2. Type cat rest-review.yaml

Note that we can combine all of our deployments and services into a single file. Also notice that the image is harbor.vmwdemo.int/library/restreview-ui:V1, which is pulled from our local private registry, Harbor.
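The single-file pattern works by separating Kubernetes objects with `---`. The sketch below illustrates the shape of such a file; the labels, ports, and replica counts are assumptions for illustration, not the lab's actual values — run `cat rest-review.yaml` to see those.

```yaml
# Illustrative sketch only: one file can hold a Deployment and its Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui
  template:
    metadata:
      labels:
        app: yelb-ui
    spec:
      containers:
      - name: yelb-ui
        image: harbor.vmwdemo.int/library/restreview-ui:V1
        ports:
        - containerPort: 80
---   # "---" separates objects within the same YAML file
apiVersion: v1
kind: Service
metadata:
  name: yelb-ui
spec:
  type: LoadBalancer   # exposed via the NSX-T load balancer in this lab
  selector:
    app: yelb-ui       # routes traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

A single `kubectl apply -f` on such a file creates (or updates) every object it contains, which is why one command brings up the whole application.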

Deploy Restaurant Review V1 application

2-17.png

Now you can deploy your application. This is done using the kubectl apply command and pointing to the appropriate YAML configuration files. You may have to run kubectl get pods a couple of times until the STATUS changes to Running. Note: if you jumped straight to this module without doing any of the earlier modules, your kubectl context will not be set. Execute the command pks get-credentials my-cluster to set the context.

  1. Type kubectl apply -f rest-review.yaml
    This command creates all deployments and services defined in the YAML file. It will take a minute or so for everything to come up.
  2. Type kubectl get pods
    Get a list of all the pods in the cluster.
  3. Type kubectl get deployments
    View your deployments.
  4. Type kubectl get rs
    View the number of replicas for each pod. It will only be one.

 

Describe The UI Pod For More Details

2-18.png

For details on your pod, you can describe it:

  1. Type kubectl describe pods yelb-ui
  2. The describe command is your first stop for troubleshooting a deployment. The event log at the bottom will often show you exactly what went wrong.

 

Find External LoadBalancer IP

2-19.png

Access the restaurant review application from your browser. The first step is to look at the service that is configured with the Load Balancer. In our case that is the UI service:

  1. Type kubectl get svc
  2. Note the EXTERNAL-IP of the yelb-ui service. That is the IP of the NSX-T load balancer, which redirects external traffic to the yelb-ui pod. Note that the load balancer port is 80.

Return to the web browser to see the running application.

 

View The Application

2-20.png

  1. Click on Google Chrome
  2. Enter the EXTERNAL-IP from kubectl get svc. It should be something like 192.168.x.x.

 

Enter Votes in the Application

The restaurant review application lets you vote as many times as you want for each restaurant.
Try opening multiple browsers and voting from each of them. You will see that the application
is caching the page views and persisting the vote totals to the Postgres database.

2-21.png

  1. Click on as many votes as you would like.
  2. Open a second browser tab, go to the application, and try voting there as well. Note that the page views increase as well.

 

Upgrade Application To Add Persistent Volumes

Our application is completely ephemeral. If we delete the pods, all of the voting and page view data is lost. We are going to add a persistent volume, backed by a vSphere virtual disk, that has a lifecycle independent of the pods and VMs it is attached to. For more information, check the storage section in Module 2 of this lab. We will see how quickly and easily you are able to define the volume mount and roll out a new version of this app without any downtime.
Kubernetes will simply create new pods with the new upgraded image and begin to terminate the pods running the old version. The service will continue to load balance across the pods that are available to run.
We are going to make two changes to this application. The first is very simple: we will add "Version 2" text to the UI page. This was done by modifying the container image associated with the yelb-ui deployment. The second change is to add volume mount information to the redis-server deployment YAML file. We will also add a Storage Class and a Persistent Volume Claim that will be used by our pods. When the new pods are created, their filesystem will be mounted on a persistent VMDK that was dynamically created.

Note: Our application stores only the page views in the Redis cache; the voting information is in the Postgres container. We are only modifying the Redis container, so after the upgrade the page views will stay, but the voting data will go away when the Postgres pods are deleted.

2-27.png

  1. Type cat rest-review-v2.yaml 
  2. Notice that the image changed to harbor.vmwdemo.int/library/rest-review:V2
  3. Also notice that the redis-server spec includes a persistentVolumeClaim and where to mount the volume in the container.
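The volume additions described in step 3 take the general shape sketched below. The claim name, mount path, and image are assumptions for illustration (the claim name is guessed from the redis-slave-claim.yaml filename) — check the actual spec with `cat rest-review-v2.yaml`.

```yaml
# Hedged sketch of the redis-server deployment after the volume changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-server
  template:
    metadata:
      labels:
        app: redis-server
    spec:
      containers:
      - name: redis-server
        image: redis                  # illustrative; the lab uses its own image
        volumeMounts:
        - name: redis-data
          mountPath: /data            # where Redis writes its persisted data
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-slave-claim   # assumed: the PVC from redis-slave-claim.yaml
```

The key point is the pairing: the `volumes` section binds a named volume to the PVC, and `volumeMounts` places that volume into the container's filesystem, so data written under the mount path survives pod deletion.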

 

Storage Class And Persistent Volume Claim

2-28.png

If you did not create the Storage Class and Persistent Volume Claim in Module 2, execute the following two commands to create a k8s Storage Class and Persistent Volume Claim.

  1. Type kubectl apply -f redis-sc.yaml
  2. Type kubectl apply -f redis-slave-claim.yaml
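The two files applied above define the storage plumbing. This sketch shows their likely shape; the class name, provisioner parameters, and requested size are assumptions for the vSphere in-tree provisioner — `cat` the files for the lab's actual definitions.

```yaml
# Assumed contents of redis-sc.yaml: a StorageClass backed by vSphere disks.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume   # in-tree vSphere volume provisioner
parameters:
  diskformat: thin                          # VMDKs are dynamically created thin-provisioned
---
# Assumed contents of redis-slave-claim.yaml: a claim against that class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-slave-claim
spec:
  storageClassName: thin-disk
  accessModes:
    - ReadWriteOnce      # one node mounts the VMDK read-write at a time
  resources:
    requests:
      storage: 2Gi       # illustrative size
```

Dynamic provisioning means you never create the VMDK by hand: when a pod references the claim, Kubernetes asks the StorageClass's provisioner to create a matching virtual disk on demand.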

 

Upgrade The Rest-review Deployment

2-34.png

  1. Type kubectl apply --record=true -f rest-review-v2.yaml
    When we apply the new desired state to an existing deployment by changing its definition (in this case changing the container image that the pod is created with), Kubernetes will kill an old pod and add a new one. If we had multiple replicas running, the application would continue to function because at least one pod would always be running.
  2. Type kubectl get pods
    You should see new pods creating and old ones terminating, but it happens fast. Try kubectl get pods repeatedly until all pods are in STATUS Running.
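The kill-old/add-new behavior described in step 1 is governed by the deployment's update strategy. The values below are the Kubernetes defaults, shown for illustration — rest-review-v2.yaml may or may not set them explicitly.

```yaml
# Deployment update strategy (defaults shown; field names are standard).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this fraction of pods may be down during rollout
      maxSurge: 25%         # at most this many extra pods may exist above the replica count
```

With multiple replicas, these bounds are what guarantee the zero-downtime behavior: the service keeps balancing across whichever pods are Running while the image is swapped underneath.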

 

View Upgraded Application

2-29.png

  1. Click on your Chrome Browser
  2. Refresh the page and notice that the title of the app displays Version 2 and that your votes are still there.
    Note: You may need to hold the Shift key down while reloading the page to get the new page.

Now let's delete the Redis server and database pods. The ReplicaSet will restart them, so let's see if our page views are still there.

 

Delete Redis Server and Database Pods

2-30.png

  1. Type kubectl get pods
    Find the name of the Redis server pod. In our case, it is redis-server-689565f5d6-84flj.
  2. Type kubectl delete pod redis-server-###### where ###### is the id from get pods
    Deletes the Redis application server.
  3. Type kubectl delete pod yelb-db-###### where ###### is the id from get pods
    Deletes the Postgres database server.
  4. Type kubectl get pods
    Notice the old pods are terminating and new pods are created.

The persistent volume will be reattached to the new Redis server pod, but the Postgres database pod has no persistent volume.

 

Refresh Browser

2-31.png

  1. Refresh the browser page
  2. Note that the page views have not reset.

Remember that the actual votes were stored in our backend Postgres database, which we did not back with a persistent volume. So that data is gone. The page views were stored in our Redis cache and were backed by our persistent volume. So they survive the removal of the pods.

 

Roll Back Restaurant Review Application Upgrade

Uh oh!! Users aren't happy with our application upgrade and the decision has been made to roll it back. Downtime and manual configuration, right? Nope. It's a simple reversal of the upgrade process.

2-32.png

  1. Type kubectl rollout history deployment/yelb-ui
    Notice that you have change tracking across all of your deployment revisions. In our case we have made only one change, so we will roll back to our original image.
  2. Type kubectl rollout undo deployment/yelb-ui --to-revision 1
  3. Type kubectl get pods
    You should see terminating pods and new pods creating.

 

Refresh Browser

2-33.png

Once they are all running, go back to Chrome and refresh the browser again.

You should see that the Version 2 text has been removed. Your page views, plus any new votes that you added after the pod deletion, are still there.

 
