VMware TKGI (formerly Enterprise PKS) - Quick Start


Before You Begin

Before you begin this walkthrough, ensure you are logged on to the VMware Tanzu desktop by following the instructions in the previous article, Getting Started & Logging In to VMware TKGI.

Please do NOT create more than 2 k8s clusters.

Also, we encourage everyone to use the My Documents folder on the VMware Tanzu Horizon Desktop to store YAML files and container images. This folder is persistent, so stored application data is retained across multiple sessions.

Section 1: Log in to the PKS CLI

On your VMware Tanzu Desktop, launch PowerShell

Run the following command to get the PKS version and a list of available PKS CLI options

pks -h

Log in to the TKGI environment using your TKGI credentials from the VMware Enterprise PKS tile on the TestDrive Portal

pks login -a pks-api.vmwtd.com -u <username> -p <password> -k

Section 2: Creating a Kubernetes Cluster

On your VMware Tanzu Desktop, launch PowerShell and run the command below to list your PKS clusters. You will see a cluster named in the format <username>-1

pks clusters

NEW: With our latest release of TestDrive PKS Demo, one cluster will come pre-created with the UAAC user that's assigned to you. Users can either run apps directly on this cluster, or create an additional cluster to demo the cluster creation process. We have set user limitations that will prevent creation of a third cluster.

Deploy a Kubernetes cluster using the command below, following the cluster naming convention <username>-<num>. Cluster creation can take between 10 and 15 minutes, depending on the active load on the system infrastructure

pks create-cluster <username>-2 -e <username>-2.vmwtd.com --plan small

The domain name "*.vmwtd.com" must remain the same, as it is the domain name for this environment.

When cluster creation is initiated, BOSH creates the k8s master and worker instances on an ESX datastore. To observe this, launch the vSphere Client shortcut on your desktop, log in using the credentials given below, and watch the VMs being created under the Compute Cluster.

URL: https://vca-1.vmwtd.com/ui
Username: [email protected]
Password: PKSdemo123!

Next, let’s observe the automated network creation in NSX. Launch the NSX Manager shortcut from the desktop and log in using the credentials below. To provide a strong isolation boundary, PKS spawns new routers, logical switches, and load balancers for each namespace in the k8s cluster. Each of these network infrastructure entries includes the cluster UUID in its name, making it easy for users to identify the ones created for their cluster:

  • A T1 router whose name matches the UUID of the created cluster
  • Logical switches with the same UUID
  • A load balancer created automatically for the cluster

At this point, we will go back to the PKS CLI and check whether the cluster has been deployed.

Run the following command to list all the k8s clusters created by your user

pks clusters

Get details of a specific cluster, such as your pre-created cluster, using the command below

pks cluster <username>-1

Section 3: Preparing to Deploy your Application

Get k8s Cluster Credentials

This command populates the kubeconfig file with the correct credentials for the cluster

pks get-credentials <username>-1
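To confirm that the credentials were written, you can check which context kubectl is now pointing at; the context name should match the cluster name (shown here as an illustration):

```shell
# Show the kubeconfig context that kubectl will use after pks get-credentials.
# The output should match the cluster name, e.g. <username>-1
kubectl config current-context
```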

List all the namespaces

kubectl get namespaces

List the pods in all namespaces

kubectl get pods --all-namespaces

Octant is a developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster on which applications reside. To help a developer better understand the state of the application running inside the cluster, Octant's dashboard allows you to navigate through your namespaces and the objects they contain. It lets you visualize the relationships between objects and resources. Unlike the Kubernetes Dashboard, Octant runs locally on your workstation and uses your Kubernetes credentials to access the cluster, thus avoiding a whole class of security concerns. For more information, refer to our blog.

From your PowerShell window on your VMware Tanzu Horizon Desktop, launch Octant.

Octant should immediately open your default web browser to its local dashboard (by default, Octant serves on http://127.0.0.1:7777).

Section 4: Deploy Restaurant Review Application

Now you can deploy your application. This is done by running kubectl apply against the appropriate YAML configuration files. You may have to run kubectl get pods a few times until the STATUS changes to Running

cd C:\PKS\apps

cat rest-review.yaml

Note that we can combine all of our deployments and services into a single file. Also notice that the image, harbor.vmwtd.com/library/restreview-ui:V1, is pulled from the private container registry, Harbor.
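A combined manifest like this is simply multiple Kubernetes objects separated by `---`. The sketch below is illustrative only; the object names and labels are assumptions, and the real definitions live in rest-review.yaml:

```yaml
# Illustrative sketch of a combined manifest -- see rest-review.yaml for the actual definitions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui
  template:
    metadata:
      labels:
        app: yelb-ui
    spec:
      containers:
      - name: yelb-ui
        image: harbor.vmwtd.com/library/restreview-ui:V1  # pulled from the private Harbor registry
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-ui
spec:
  type: LoadBalancer   # NSX-T provisions an external load balancer for this service
  selector:
    app: yelb-ui
  ports:
  - port: 80
    targetPort: 80
```

Because the Service is of type LoadBalancer, NSX-T assigns it the external IP you will look up in the next section.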

Harbor showing the rest-review application

Harbor with vulnerability scanning

Run the application

This command creates all the deployments and services defined in the YAML file. They will take a minute or so to come up

kubectl apply -f rest-review.yaml 
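Instead of polling kubectl get pods, you can wait on a deployment directly. The deployment name below is an assumption based on the yelb-ui service used later in this walkthrough:

```shell
# Block until the named deployment's pods have rolled out and are Ready
kubectl rollout status deployment/yelb-ui
```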

Get list of all the pods in the cluster

kubectl get pods --all-namespaces

View your deployment

kubectl get deployments

View the number of replicas for this deployment. It will only be one in this case

kubectl get rs

Get External LoadBalancer IP

Get the external IP to reach your application by running the command below and noting the EXTERNAL-IP listed for yelb-ui. That is the IP of the NSX-T load balancer, which redirects external traffic to the yelb-ui pod. Note that the load balancer listens on port 80

kubectl get svc
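If you prefer to capture the address in a script rather than read it from the table, the external IP can be extracted with a jsonpath query (service name yelb-ui as shown in the get svc output):

```shell
# Print only the external IP assigned to the yelb-ui LoadBalancer service
kubectl get svc yelb-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```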

View The Application

  1. Open Google Chrome
  2. Enter the EXTERNAL-IP from the kubectl get svc output. It should be in the format 192.168.x.x

Enter Votes in the Application

The restaurant review application lets you vote as many times as you want for each restaurant. Try opening multiple browsers and voting from each of them. You will see that the application is caching the page views and persisting the vote totals to the Postgres database.

Play around with Votes and page views

Back in NSX Manager, you can view the load balancer rules created for this IP.

Delete application

Run the following commands to delete the application

kubectl delete -f rest-review.yaml

We recommend that users do NOT delete their clusters after their demo is complete, as we have an automated system in place that will clean up clusters and their associated NSX objects

Appendix A: TestDrive PKS Architecture Diagram

This topology has the following characteristics:

  • PKS control plane (Ops Manager, BOSH Director, and PKS VM) components are using corporate routable IP addresses.
  • Kubernetes cluster master and worker nodes are located on a logical switch that has undergone Network Address Translation on a T0. This requires DNAT rules to allow access to Kubernetes APIs.