Pivotal Container Service - Quick Start

Overview


Before You Begin


Before you begin this walkthrough, ensure you are logged on to the PKS desktop by following the instructions found here.

Please do NOT create more than 2 k8s clusters.

Also, we encourage everyone to use the H:\ drive located on the PKS Horizon Desktop to store YAML files and container images. This drive is persistent, which means users can retrieve their stored application data across multiple sessions.

Walkthrough Video


 

 

Section 1: Log in to the PKS CLI


On your Pivotal Container Service Desktop, launch PowerShell

Run the following command to get the PKS version and a list of available PKS CLI options

pks -h

Picture1.png

 

Log in to the PKS environment using your PKS credentials from the Pivotal Container Service tile on the TestDrive Portal. The -k flag skips SSL certificate validation.

pks login -a pks-api.vmwdemo.int -u <username> -p <password> -k

Picture3.png

 

Section 2: Creating a Kubernetes Cluster


On your Pivotal Container Service Desktop, launch PowerShell

Deploy a Kubernetes cluster using the command below, following the cluster naming convention of <username>-<num>. Cluster creation can take between 10 and 15 minutes, depending on the active load on the system infrastructure

pks create-cluster <username>-1 -e <username>-1.vmwdemo.int --plan small

The domain name "*.vmwdemo.int" must remain unchanged, as it is the domain name for this environment. Please follow the naming convention above

Picture4.png

NOTE: Please do NOT create more than 2 k8s clusters
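While the create is running, you can poll its status from a second PowerShell window. The pks cluster command (used again later in this walkthrough) reports the last action and its state, which moves from "in progress" to "succeeded" once the cluster is ready

pks cluster <username>-1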

 

When cluster creation is initiated, BOSH creates the k8s master and worker instances on an ESX datastore. To see this, launch the vSphere Client shortcut on your desktop, log in using the credentials below, and observe the VMs being created under the Compute Cluster.

URL: https://pks-vc-1.vmwdemo.int/vsphere-client
Username: pksdemo@vsphere.local
Password: PKSdemo123!

Picture7.png

 

Next, let’s observe the automated network creation in NSX. Launch the NSX Manager shortcut from the desktop and log in using the credentials below. To provide a strong isolation boundary, PKS spawns new routers, logical switches, and load balancers for each namespace in the k8s cluster. Each of these network infrastructure entries includes the cluster UUID in its name, making it easy for users to identify the ones created for their cluster.

URL: https://pks-nsxtmgr-1.vmwdemo.int 
Username: audit
Password: PKSdemo123!

T1 router with a name matching the UUID of the created cluster

Picture8.png

Logical switches with the same UUID

Picture9.png

Load balancer created automatically for our cluster

Picture10.png


At this point, we will go back to the PKS CLI and check whether the cluster has been deployed.

Run the following command to list all the k8s clusters created by your user

pks clusters

Picture5.png

Get details of the cluster you created by using the command below

pks cluster <username>-1

Picture12.png

 

Section 3: Preparing to Deploy your Application


Get k8s Cluster Credentials

This command populates your kubeconfig file with the credentials and context for the cluster

pks get-credentials <username>-1

Picture13.png
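Optionally, confirm that kubectl now points at your new cluster. These are standard kubectl commands, not specific to this environment

kubectl config current-context

kubectl cluster-info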

List all the namespaces

kubectl get namespaces

Picture14.png

List the pods in all namespaces

kubectl get pods --all-namespaces


Picture15.png


Section 4: Deploy Restaurant Review Application


Now you can deploy your application. This is done by running kubectl apply and pointing it at the appropriate YAML configuration files. You may have to run kubectl get pods a few times until the STATUS changes to Running.

cd C:\PKS\apps

cat rest-review.yaml

Picture16.png

 

Note that we can combine all of our deployments and services into a single file. Also notice that the image, harbor.vmwdemo.int/library/restreview-ui:V1, is pulled from the private container registry called Harbor.
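For reference, a multi-resource manifest is just multiple YAML documents separated by "---" lines. The sketch below shows only the general shape; apart from the yelb-ui name, port 80, and the Harbor image used in this walkthrough, the labels and values are illustrative assumptions, not the actual contents of rest-review.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui
  template:
    metadata:
      labels:
        app: yelb-ui
    spec:
      containers:
      - name: yelb-ui
        image: harbor.vmwdemo.int/library/restreview-ui:V1   # image served from Harbor
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-ui
spec:
  type: LoadBalancer   # NSX-T provides the external load balancer
  ports:
  - port: 80
  selector:
    app: yelb-ui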

Harbor showing the rest-review application

Picture17.png

Harbor with vulnerability scanning

Picture19.png

 

Run the application

This command creates all the deployments and services defined in the YAML file. The application will take a minute or so to come up

kubectl apply -f rest-review.yaml 

List all the pods in the default namespace

kubectl get pods

View your deployment

kubectl get deployments

View the number of replicas for this deployment. It will only be one in this case (an optional scaling experiment follows the screenshot below)

kubectl get rs

Picture20.png
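As an optional experiment, you can scale a deployment and watch its ReplicaSet follow. The deployment name yelb-ui below is an assumption based on the service name used later in this section, so check kubectl get deployments for the actual name, and scale back to 1 when you are done

kubectl scale deployment yelb-ui --replicas=2

kubectl get rs

kubectl scale deployment yelb-ui --replicas=1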

Get External LoadBalancer IP

Get the external IP to reach your application by running the command below and noting the EXTERNAL-IP listed for yelb-ui. This is the IP of the NSX-T load balancer, which redirects external traffic to the yelb-ui pod. Note that the load balancer listens on port 80

kubectl get svc

Picture21.png
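If you want just the address by itself (for scripting, for example), jsonpath can extract it from the service object. This assumes the UI service is named yelb-ui, as shown in the output above

kubectl get svc yelb-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'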

 

View The Application

  1. Open Google Chrome
  2. Enter the EXTERNAL-IP from the kubectl get svc output. It should be in the format 192.168.x.x

Picture22.png

Enter Votes in the Application

The restaurant review application lets you vote as many times as you want for each restaurant. Try opening multiple browsers and voting from each of them. You will see that the application caches the page views and persists the vote totals to the Postgres database.
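To see the components behind this behavior, list the pods in the cluster. The backing component names below (yelb-appserver, redis-server for the cache, yelb-db for Postgres) follow the upstream yelb sample and are assumptions here, so go by what kubectl actually reports

kubectl get pods -o wide

kubectl logs deploy/yelb-appserver --tail=20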

Play around with Votes and page views

Picture23.png

Show LB rules in NSX

Load balancer for this IP

Picture24.png


Picture25.png


Delete application

Run the following commands to delete the application

kubectl delete -f rest-review.yaml

Picture26.png
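To confirm the cleanup, list what remains. The application pods should be gone, and only the built-in kubernetes service should be left in the default namespace

kubectl get pods,svc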

 

We recommend that users do NOT delete their clusters after their demo is complete, as we have an automation system in place that will clean up clusters and their associated NSX objects.

 

Appendix A: TestDrive PKS Architecture Diagram


 

PKS_POD_INFRASTRUCTURE_DIAGRAM__1_.jpeg

This topology has the following characteristics:

  • PKS control plane components (Ops Manager, BOSH Director, and the PKS VM) use corporate routable IP addresses.
  • Kubernetes cluster master and worker nodes use corporate routable IP addresses.
  • The PKS control plane is deployed outside of the NSX-T network, while the Kubernetes clusters are deployed and managed within it. Since BOSH needs routable access to the Kubernetes nodes to monitor and manage them, the nodes are given corporate routable IP addresses.

 

Walkthrough Summary


 

For Additional Support


Review Our Knowledge Base
