Table of Contents
- Before you begin
- View the Yaml Files
- Deploy Restaurant Review V1 application
- Upgrade Application To Add Persistent Volumes
- Check Page Views & Votes
- Roll Back Restaurant Review Application Upgrade
- Check Application Status
- Delete application and persistent volumes
Before You Begin
Before you begin this walkthrough, ensure you are logged onto the PKS desktop by following the instructions found here. If you ran out of time and are coming back to this lab later, your clusters might have been deleted, so you will need to authenticate and recreate the k8s cluster as described in the Introduction to Pivotal Container Service lab.
The rest-review appserver is a Sinatra application that reads and writes to a cache server (redisserver) as well as a Postgres backend database (restreview-db). Redis is used to store the number of page views whereas Postgres is used to persist the votes. As part of lab setup, container images have been built for you.
This diagram represents the application we are going to manage. The application consists of four separate Kubernetes deployments, each fronted by its own service: the frontend web server (yelb-ui), the Sinatra application server, the Redis key-value store, and the Postgres database. Each deployment defines a replica set for the underlying pods.
Deploy and Upgrade Restaurant Review Application to Add Persistent Volumes
We will deploy our restaurant review application and submit a few votes to see how it works. Our application is completely ephemeral: if a pod dies, all of its state is lost. That's not what we want for an application that includes a database and a cache. We will upgrade the application to take advantage of persistent volumes and verify that killing the pods does not remove the data.
Note: This Lab assumes that k8s cluster creation was already performed in Module 2. If you are starting with this Module and have not gone through lab 2, please perform the following steps from Module 2 before continuing.
- Login to PKS Cluster
- Deploy a Kubernetes Cluster
- Get Kubernetes Cluster Credentials
View the Yaml Files
In Module 3 we went through the details of the deployment, pod and service specs so we won't do that again here.
- Type cd C:\PKS\apps
- Type cat rest-review.yaml
Note that we can combine all of our deployments and services into a single file. Also notice that the image is harbor.vmwdemo.int/library/restreview-ui:V1, which is pulled from our local private registry, Harbor.
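A combined manifest simply separates each Kubernetes object with `---`. The following is an illustrative sketch of the pattern only, not the exact contents of rest-review.yaml (the label names and port values here are assumptions; check the real file):

```yaml
# Sketch: several objects in one file, separated by "---".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui          # label names are illustrative assumptions
  template:
    metadata:
      labels:
        app: yelb-ui
    spec:
      containers:
      - name: yelb-ui
        image: harbor.vmwdemo.int/library/restreview-ui:V1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-ui
spec:
  type: LoadBalancer        # exposed via the NSX-T load balancer
  selector:
    app: yelb-ui
  ports:
  - port: 80
# ...the appserver, redis-server, and db objects follow the same pattern.
```

Applying the file once creates every object in it, which is why a single `kubectl apply -f` stands up the whole application.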
Deploy Restaurant Review V1 application
Now you can deploy your application. This is done using the kubectl apply command, pointing to the appropriate yaml configuration files. You may have to run kubectl get pods a couple of times until the STATUS changes to Running. Note: if you jumped straight to this module without doing any of the earlier modules, your kubectl context will not be set. Execute the command pks get-credentials my-cluster to set the context.
- Type kubectl apply -f rest-review.yaml
This command creates all deployments and services defined in the yaml file. It will take a minute or so for everything to come up.
- Type kubectl get pods
Get list of all the pods in the cluster
- Type kubectl get deployments
View your deployment
- Type kubectl get rs
View the number of replicas for this pod. It will only be one.
Describe The UI Pod For More Details
For details on your pod, you can describe it
- Type kubectl describe pods yelb-ui
The describe command is your first stop for troubleshooting a deployment. The event log at the bottom will often show you exactly what went wrong.
Find External LoadBalancer IP
Access the restaurant review application from your browser. The first step is to look at the service that is configured with the Load Balancer. In our case that is the UI service:
- Type kubectl get svc
- Note the EXTERNAL-IP of the yelb-ui service. That is the IP of the NSX-T load balancer, which redirects external traffic to the yelb-ui pod. Note that the load balancer port is 80.
Return to the web browser to see the running application.
View The Application
- Click on Google Chrome
- Enter the EXTERNAL-IP from the kubectl get svc. It should be something like 192.168.x.x
Enter Votes in the Application
The restaurant review application lets you vote as many times as you want for each restaurant.
Try opening multiple browsers and voting from each of them. You will see that the application
is caching the page views and persisting the vote totals to the Postgres database.
- Click on as many votes as you would like.
- Open a second browser tab, go to the application, and try voting there as well. Note that the page views are increasing as well.
Upgrade Application To Add Persistent Volumes
Our application is completely ephemeral. If we delete the pods, all of the voting and page view data is lost. We are going to add a persistent volume, backed by a vSphere virtual disk, that has a lifecycle independent of the pods and VMs it is attached to. For more information, check the storage section in Module 2 of this lab. We will see how quickly and easily you are able to define the volume mount and roll out a new version of this app without any downtime.
Kubernetes will simply create new pods with a new upgrade image and begin to terminate the pods with the old version. The service will continue to load balance across the pods that are available to run.
We are going to make two changes to this application. The first is very simple. We will add "Version 2" text to the UI page. This was done by modifying the container image associated with the yelb-ui deployment. The second change is to add Volume mount information to the Redis-Server deployment yaml file. We will also add a Storage Policy and a Persistent Volume Claim that will be used by our Pods. When the new pods are created, their filesystem will be mounted on a persistent VMDK that was dynamically created.
Note: Our application stores only the page views in the Redis cache; the voting information is in the Postgres container. We are only modifying the Redis container, so after the upgrade the page views will survive, but the voting data will go away when the Postgres pods are deleted.
- Type cat rest-review-v2.yaml
- Notice that the image changed to harbor.vmwdemo.int/library/rest-review:V2
- Also notice that the redis-server spec includes a persistentVolumeClaim and where to mount the volume in the container.
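The volume wiring in the redis-server spec follows the standard Kubernetes pattern: a `volumes` entry references the claim, and a `volumeMounts` entry maps it into the container. This is a minimal sketch of that part of the pod spec, not the exact lab file; the mount path, image tag, and claim name here are assumptions inferred from the lab's file names:

```yaml
# Sketch of the redis-server pod spec (abbreviated, values are assumptions).
spec:
  containers:
  - name: redis-server
    image: redis                      # actual image/tag: see rest-review-v2.yaml
    volumeMounts:
    - name: redis-data                # must match a volume name below
      mountPath: /data                # Redis persists its dump file under /data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: redis-slave-claim    # the PVC from redis-slave-claim.yaml
```

Because the claim is resolved at pod creation time, any replacement pod gets the same VMDK mounted, which is what preserves the page views across pod deletion.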
Storage Class And Persistent Volume Claim
If you did not create the Storage Class and Persistent Volume Claim in Module 2, execute the following two commands to create a k8s Storage Class and Persistent Volume Claim.
- Type kubectl apply -f redis-sc.yaml
- Type kubectl apply -f redis-slave-claim.yaml
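The two files pair a StorageClass (which tells Kubernetes how to provision storage, here via the vSphere volume provisioner) with a claim (which requests a volume from that class and triggers dynamic VMDK creation). A minimal sketch of what such files typically contain; the class name and storage size are assumptions, so consult the actual redis-sc.yaml and redis-slave-claim.yaml:

```yaml
# Sketch of redis-sc.yaml: StorageClass backed by the vSphere provisioner.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk                        # name is an illustrative assumption
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                       # thin-provisioned VMDK
---
# Sketch of redis-slave-claim.yaml: claim that dynamically creates a VMDK.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-slave-claim
spec:
  storageClassName: thin-disk            # must match the StorageClass above
  accessModes:
  - ReadWriteOnce                        # mounted read-write by a single node
  resources:
    requests:
      storage: 2Gi                       # size is an illustrative assumption
```

Once the claim is bound, the deployment in rest-review-v2.yaml can reference it by name and Kubernetes handles attaching the disk to whichever node runs the pod.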
Upgrade The Rest-review Deployment
- Type kubectl apply --record=true -f rest-review-v2.yaml
When we apply the new desired state to an existing deployment by changing its definition (in this case, changing the container image that the pod is created with), Kubernetes will kill an old pod and add a new one. If we had multiple replicas running, the application would continue to function because at least one pod would always be running.
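This zero-downtime behavior is governed by the deployment's update strategy. The fields below are standard Kubernetes deployment spec fields shown with their default values, made explicit here purely for illustration (the lab's yaml may simply rely on the defaults):

```yaml
# Sketch: the rolling-update knobs on a Deployment spec (Kubernetes defaults).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of replicas down during rollout
      maxSurge: 25%         # at most a quarter extra replicas created temporarily
```

With a single replica, as in this lab, you will briefly see the old and new pods side by side while the rollout swaps them.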
- Type kubectl get pods
You should see new pods being created and old ones terminating, but it happens fast. Run kubectl get pods repeatedly until all pods are in STATUS Running.
View Upgraded Application
- Click on your Chrome Browser
- Refresh the Page and notice that the title of the app displays Version 2 and that your Votes are still there.
Note: You may need to hold the shift key down while reloading the page to get the new page.
Now let's delete the Redis server and database pods. The replication controller will restart them, so let's see if our page views are still there.
Delete Redis Server and Database Pods
- Type kubectl get pods
Find the name of the Redis Server pod. In our case, it is redis-server-689565f5d6-84flj
- Type kubectl delete pod redis-server-###### where ###### is the id from get pods
Deleting the Redis application server
- Type kubectl delete pod yelb-db-###### where ###### is the id from get pods
Deleting the postgres database server
- Type kubectl get pods
Notice that the pods are terminating and new pods are being created.
The persistent volume will be reattached to the new Redis server pod, but not to the Postgres database pod, which has no persistent volume.
Check Page Views & Votes
- Refresh the browser page
- Note that the page views have not been reset
Remember that the actual votes were stored in our backend Postgres database, which we did not back with a persistent volume. So that data is gone. The page views were stored in our Redis cache and were backed by our persistent volume. So they survive the removal of the pods.
Roll Back Restaurant Review Application Upgrade
Uh oh!! Users aren't happy with our application upgrade, and the decision has been made to roll it back. Downtime and manual configuration, right? Nope. It's a simple reversal of the upgrade process.
- Type kubectl rollout history deployment/yelb-ui
Notice that you have change tracking across all of your deployment revisions. In our case we have made only one change, so we will roll back to our original image.
- Type kubectl rollout undo deployment/yelb-ui --to-revision 1
- Type kubectl get pods
You should see terminating pods and new pods creating.
Check Application Status
Once they are all running, go back to Chrome and refresh the browser again.
You should see that the Version 2 text has been removed. Your page views, plus any new votes you added after the pod deletion, are still there.
Delete application and persistent volumes
Run the following commands to delete the application:
- kubectl delete -f rest-review.yaml
This removes the application. We delete version 1 of the app because we had rolled the upgrade back.
- kubectl delete -f redis-slave-claim.yaml
This deallocates the persistent volume claim created earlier for the Redis pod.
- kubectl delete -f redis-sc.yaml
This deletes the storage class that was used to create the persistent volume claim.