33 Video Tutorial: Advanced Level Summary of Operations #

Hello, I’m Chrono.

During the “Advanced Practice” period, we have learned about API objects such as PersistentVolume, PersistentVolumeClaim, StatefulSet, etc., enabling us to deploy stateful applications. We have also learned various ways to manage and operate applications and clusters, including rolling updates, resource quotas, health checks and probes, namespaces, system monitoring, and so on.

With the knowledge you have gained, when you think back to the first class three months ago, do you find that Kubernetes isn’t as complex and mysterious as you originally imagined?

Today is also the last regular class of our course, and we will again use video to demonstrate some important operations from the “Advanced Practice” section. Combined with the earlier text and images, this should deepen your understanding of Kubernetes.

Let’s begin our learning journey together.


  1. PV and PVC

First, let’s create a local storage volume, which is PV.

Under the “/tmp” directory on the Master and Worker nodes, create a directory called “host-10m-pv” to represent a storage device with a capacity of only 10MB:

mkdir /tmp/host-10m-pv

Next, we’ll use YAML to define this PV object:

vi host-path-pv.yml

Its kind is PersistentVolume, and its name is “host-10m-pv”. The fields in the “spec” are all important and describe the basic information of the PV (a sketch of the full file follows the list).

  • Since this PV is manually managed, you can choose any name for the “storageClassName”. Here I have written “host-test”.
  • “accessModes” define the access mode of the storage device, using the simplest “ReadWriteOnce”, which means it can be read and written, but can only be mounted by Pods on this node.
  • “capacity” defines the storage capacity. Since this is a test, we set it to 10MB. Note that the capacity is specified as a Kubernetes resource quantity and must be written with a binary suffix such as Ki/Mi/Gi (here “10Mi”); writing it as “10MB” will cause an error.
  • The last field, “hostPath”, specifies the local path of the storage volume, which is the directory we just created on the node, “/tmp/host-10m-pv/”.
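
Putting these fields together, the file might look roughly like this (a minimal sketch using only the values described above; nothing else is assumed):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-10m-pv
spec:
  storageClassName: host-test
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  hostPath:
    path: /tmp/host-10m-pv/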

Now let’s use kubectl apply to create the PV object:

kubectl apply -f host-path-pv.yml

Then use kubectl get to check the status:

kubectl get pv

You can see that Kubernetes already has this storage volume. Its capacity is 10MB, access mode is RWO, and the status is “Available”. The StorageClass is our own custom “host-test”.

Next, let’s define the PVC object:

vi host-path-pvc.yml

Its name is “host-5m-pvc”, and the “storageClassName” is “host-test”. The access mode is “ReadWriteOnce”, and in the “resources” field, we request to use 5MB of storage from Kubernetes.
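
For reference, a minimal sketch of such a PVC, again using only the values just described:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-5m-pvc
spec:
  storageClassName: host-test
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi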

PVC is relatively simple and does not include storage details like PV. We’ll create the object using kubectl apply:

kubectl apply -f host-path-pvc.yml

Then use kubectl get to check the status of PV and PVC:

kubectl get pv,pvc

You will see that the storage volume has been successfully allocated, and the status is “Bound”. Although the PVC requested 5MB, there is only one 10MB PV available in the system, so Kubernetes can only bind them together.

The next step is to mount this PVC into a Pod. Let’s take a look at this YAML file:

vi host-path-pod.yml

In the “volumes” section, we use “persistentVolumeClaim” to declare the name of the PVC as “host-5m-pvc”, which links the PVC and the Pod.

The “volumeMounts” section mounts the storage volume, which you should be familiar with by now. Use “name” to specify the volume name and “mountPath” to specify the path inside the container. Here, it is mounted to the “/tmp” directory in the container.
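
The Pod might be declared roughly as follows (a sketch; the Pod name “host-pvc-pod” comes from the commands below, while the container image and the volume name “host-pvc-vol” are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: host-pvc-pod
spec:
  volumes:
  - name: host-pvc-vol             # assumed volume name
    persistentVolumeClaim:
      claimName: host-5m-pvc       # links the Pod to the PVC
  containers:
  - name: ngx-pvc-pod              # assumed container name
    image: nginx:alpine            # assumed image
    volumeMounts:
    - name: host-pvc-vol
      mountPath: /tmp              # mount point inside the container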

Now let’s create this Pod and check its status:

kubectl apply -f host-path-pod.yml
kubectl get pod -o wide

You will see that Kubernetes has scheduled it to a worker node. Let’s use kubectl exec to enter the container, execute a shell command, and generate a text file:

kubectl exec -it host-pvc-pod -- sh
echo aaa > /tmp/a.txt

Then, log in to the worker node and check the directory corresponding to the PV, “/tmp/host-10m-pv”:

cd /tmp/host-10m-pv
ls 
cat a.txt

The output content should be exactly what we generated in the container. This demonstrates that the Pod’s data has indeed been persisted to the storage device defined by the PV.


  2. NFS Network Storage Volume

Next, let’s take a look at how to use NFS network storage volumes in Kubernetes.

I will skip the installation process of the NFS server and client, as well as the NFS Provisioner. You can follow along with [Lesson 25], step by step.

After installation, you can check the running status of the Provisioner:

kubectl get deploy -n kube-system | grep nfs
kubectl get pod -n kube-system | grep nfs

Note: Be sure to configure the IP address and directory in the NFS Provisioner correctly. If the address is incorrect or the directory does not exist, the Pod will not start properly, and you’ll need to use kubectl logs to check the error message.

Let’s take a look at the default StorageClass definition for NFS:

vi class.yaml

Its name is “nfs-client”, which is crucial. We must include it in the PVC so that the NFS Provisioner can find it.

Now let’s look at the definitions for PVC and Pod:

vi nfs-dynamic-pv.yml

This PVC requests 10MB of storage and uses the access mode “ReadWriteMany”. This is because NFS is a network-shared storage that supports multiple Pods mounting simultaneously.

In the Pod, we still declare the PVC using “persistentVolumeClaim” and mount the storage volume using “volumeMounts” to the “/tmp” directory inside the container.
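
A minimal sketch of the PVC described here (the PVC name “nfs-dyn-10m-pvc” is an assumption; the storage class, access mode, and size come from the text):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-dyn-10m-pvc            # assumed name
spec:
  storageClassName: nfs-client     # must match the StorageClass above
  accessModes:
  - ReadWriteMany                  # NFS allows multiple Pods to mount it
  resources:
    requests:
      storage: 10Mi

The Pod then references this claim through “persistentVolumeClaim”, just as in the hostPath example.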

Apply this YAML using kubectl apply to create the PVC and Pod, and check the PV and PVC in the cluster using kubectl get pv,pvc:

kubectl get pv,pvc

You will see that the NFS Provisioner automatically creates a 10MB PV for us.

Let’s also check the shared directory on the NFS server. There we will find the storage directory created for the PV; its name starts with the namespace (“default”) followed by the name of the PVC. Create a text file in this directory:

echo aaa > a.txt

Then, use kubectl exec to enter the Pod and check the contents of the “/tmp” directory:

kubectl exec -it nfs-dyn-pod -- sh
cd /tmp
ls
cat a.txt

You will see that the file from the NFS disk is also present in the container, indicating that the network storage volume is shared.

  3. Creating StatefulSet Objects That Use NFS

After understanding how to use PV, PVC, and NFS, we can now experiment with using StatefulSet. Let’s create an object that uses NFS storage.

Take a look at the YAML description file for the StatefulSet object:

vi redis-pv-sts.yml

Two important fields are:

  • “serviceName” specifies that the StatefulSet’s service name is “redis-pv-svc”, which must match the Service object defined later.
  • “volumeClaimTemplates” embeds the PVC template directly into the object. The storageClass is still “nfs-client”, and it requests a storage capacity of 100MB.
  • The remaining fields are similar to Deployment, including replicas, selector, and containers.

Under the StatefulSet object is its associated Service definition. The key is to set “clusterIP: None” so that no virtual IP address is allocated for the Service, which makes it a “Headless Service”. Access to the StatefulSet does not go through Service forwarding; instead each Pod is addressed directly.
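
A condensed sketch of these two objects, using the names and sizes mentioned above (the replica count, Redis image, labels, and mount path are assumptions for illustration):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-pv-sts
spec:
  serviceName: redis-pv-svc         # must match the Service below
  replicas: 2                       # assumed replica count
  selector:
    matchLabels:
      app: redis-pv-sts             # assumed label
  template:
    metadata:
      labels:
        app: redis-pv-sts
    spec:
      containers:
      - name: redis
        image: redis:6-alpine       # assumed image
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-100m-pvc
          mountPath: /data          # Redis data directory (assumed)
  volumeClaimTemplates:             # PVC template embedded in the object
  - metadata:
      name: redis-100m-pvc          # assumed template name
    spec:
      storageClassName: nfs-client
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Mi

---
apiVersion: v1
kind: Service
metadata:
  name: redis-pv-svc
spec:
  clusterIP: None                   # headless: no virtual IP is allocated
  selector:
    app: redis-pv-sts
  ports:
  - port: 6379
    targetPort: 6379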

Create these two objects using kubectl apply to run the “stateful application”:

kubectl apply -f redis-pv-sts.yml

Use kubectl get to check the status of the StatefulSet object:

kubectl get sts,pod

The names of these two stateful Pods are numbered sequentially starting from 0: redis-pv-sts-0 and redis-pv-sts-1. They also start in order; Pod 1 is created only after Pod 0 has started successfully.

Now, use kubectl exec to enter the Pod:

kubectl exec -it redis-pv-sts-0 -- sh

Check its hostname, which should be the same as the Pod name:

hostname

Next, test the individual domain names of the two Pods; both should be reachable:

ping redis-pv-sts-0.redis-pv-svc
ping redis-pv-sts-1.redis-pv-svc

To check the storage volumes associated with the StatefulSet, use kubectl get pv,pvc:

kubectl get pv,pvc

You will see that the StatefulSet has created two PVCs from its template, and the NFS Provisioner has automatically created and bound a PV for each of them.

To verify the persistence of the storage, use kubectl exec to run the Redis client and add some KV data:

kubectl exec -it redis-pv-sts-0 -- redis-cli
set a 111
set b 222
quit

Then, delete this Pod:

kubectl delete pod redis-pv-sts-0

The StatefulSet monitors its Pods, and when the count falls below the desired number it recreates the Pod under the same name. Once it is running again, use the Redis client to check:

kubectl exec -it redis-pv-sts-0 -- redis-cli
get a
get b
keys *

You will see that the Pod mounts the original storage volume and automatically recovers the previously added Key-Value data.

  4. Rolling Update

Now let’s learn about the usage of rolling updates in Kubernetes.

Here is a V1 version of the Nginx application:

vi ngx-v1.yml

Note that in the “annotations” section we use the field “kubernetes.io/change-cause” to indicate the version information “v1, ngx=1.21” and the image used is “nginx:1.21-alpine”.
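
The relevant part of the Deployment might look like this (a sketch; the Deployment name “ngx-dep” comes from the rollout commands used later, while the replica count and labels are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: v1, ngx=1.21   # recorded in the rollout history
spec:
  replicas: 4                                  # assumed replica count
  selector:
    matchLabels:
      app: ngx-dep                             # assumed label
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80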

There is also a corresponding Service, which is a NodePort type, so I won’t explain it further.

Next, we deploy this application using the kubectl apply command:

kubectl apply -f ngx-v1.yml

Use curl to send an HTTP request and check its running information:

curl 192.168.10.210:30080

From the output of the curl command, we can see that the current version of the application is “1.21.6”.

Execute kubectl get pod and note the hash suffix in the Pod names; it identifies the version of the Pod template.

Now let’s take a look at the second version of the object “ngx-v2.yml”:

vi ngx-v2.yml

In the “annotations” section, it also indicates the version information, and the image is upgraded to “nginx:1.22-alpine”. It also adds a field “minReadySeconds” to facilitate observing the application update process.
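
Compared with v1, only a few fields change; a sketch of the relevant fragment (the change-cause value and the minReadySeconds number are not spelled out in the text, so they are assumptions following the v1 pattern):

metadata:
  annotations:
    kubernetes.io/change-cause: v2, ngx=1.22   # assumed wording
spec:
  minReadySeconds: 15                          # assumed value; slows the rollout so it can be observed
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.22-alpine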

Now we apply the changes by executing the kubectl apply command. Since we changed the image name and the Pod template has changed, it will trigger a “version update”. You can use kubectl rollout status to check the status of the application update:

kubectl apply -f ngx-v2.yml
kubectl rollout status deploy ngx-dep

After that, execute kubectl get pod and you will see that all Pods have been replaced with the new version. When you curl to access Nginx, the output information will also change to “1.22.0”:

kubectl get pod
curl 192.168.10.210:30080

The kubectl describe command gives a clearer view of how the Pods are replaced: the rolling update interleaves two scaling actions, scaling the new ReplicaSet up while scaling the old one down.

kubectl describe deploy ngx-dep

Next, let’s check the update history using the kubectl rollout history command:

kubectl rollout history deploy ngx-dep

You can see the update information of the two versions in the “CHANGE-CAUSE” column.

Finally, we can use kubectl rollout undo to roll back to the previous version:

kubectl rollout undo deploy ngx-dep

Check the update history again with kubectl rollout history, and you will find that it has been restored to the initial version. You can also use curl to send a request and verify it:

curl 192.168.10.210:30080

Nginx has reverted to version 1.21.6, indicating that the version rollback was successful.

  5. Horizontal Scaling

Next, let’s take a look at Kubernetes’ horizontal autoscaling feature, which is achieved through the “HorizontalPodAutoscaler” object.

The horizontal autoscaling feature requires the installation of the Metrics Server plugin. I won’t demonstrate the installation process, but let’s see its running status using the kubectl get pod command:

kubectl get pod -n kube-system

You should see a Metrics Server Pod running.

Next, let’s check the current system metrics:

kubectl top node
kubectl top pod -n kube-system

Make sure the Metrics Server is running properly. Now we can try out the horizontal autoscaling feature.

First, let’s define a Deployment object to deploy one instance of Nginx:

vi ngx-hpa-dep.yml

Note that in the YAML file, we must explicitly specify the “resources” field to define its resource allocation. Otherwise, the HorizontalPodAutoscaler won’t be able to retrieve the Pod metrics and automatic scaling won’t work.
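
For example, the container section could declare its resources like this (the exact numbers and image are assumptions; what matters is that requests are present so utilization can be computed against them):

containers:
- name: nginx
  image: nginx:alpine          # assumed image
  ports:
  - containerPort: 80
  resources:
    requests:
      cpu: 50m                 # assumed request
      memory: 10Mi
    limits:
      cpu: 100m                # assumed limit
      memory: 20Mi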

After creating the object using kubectl apply, you can use kubectl get pod to see that there is currently only one instance.

The HPA object is simple. It sets the maximum number of Pods to 10, the minimum to 2, and the target CPU utilization to 5%. Create the HPA with kubectl apply; since there is only one Nginx instance running, which is below the lower limit, the HPA will immediately scale the Deployment up to two instances.
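
A sketch of such an HPA object using the autoscaling/v1 API (the object name “ngx-hpa” is an assumption; the target Deployment name comes from the file we just created):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ngx-hpa                    # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ngx-hpa-dep
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 5

Then check the Pods and the HPA status: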

kubectl get pod
kubectl get hpa

Next, let’s launch a test Pod from the httpd image and use the load-testing tool “ab” (Apache Bench) that ships with it to put traffic pressure on Nginx:

kubectl run test -it --image=httpd:alpine -- sh

Send requests to Nginx with a concurrency of 10 for up to 30 seconds (capped at one million requests in total), and then use kubectl get hpa to observe the status of the HorizontalPodAutoscaler:

ab -c 10 -t 30 -n 1000000 'http://ngx-hpa-svc/'

kubectl get hpa

You can see that the HPA continuously monitors the CPU usage of the Pod through the Metrics Server. When it exceeds the set value, it starts scaling up until it reaches the maximum number.

  6. Prometheus

Let’s take a look at Prometheus, the second project to join the CNCF.

First, download the kube-prometheus source package from GitHub; the latest version at the time of recording is 0.11. Then unpack it to get the YAML files required for deployment.

Next, modify the prometheus-service.yaml and grafana-service.yaml files to change the Service type to NodePort, so that we can directly access them using the node’s IP address. For convenience, we also specify the node ports for Prometheus as “30090” and for Grafana as “30030”.
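
The change amounts to a few lines in each Service. A sketch of the Prometheus one is below; the Service and port names are taken from the kube-prometheus manifests and may differ in your version, and the grafana-service.yaml file is edited the same way with nodePort 30030:

spec:
  type: NodePort                   # changed from the default ClusterIP
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090                # fixed node port for convenient access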

Remember to modify the kubeStateMetrics-deployment.yaml and prometheusAdapter-deployment.yaml files because the images in them are difficult to pull from gcr.io. It will be easier if you change them to the ones on Docker Hub.

After making the modifications, execute two kubectl create commands to deploy Prometheus. First, create the necessary objects such as the namespace in the “manifests/setup” directory, and then create the actual objects in the “manifests” directory:

kubectl create -f manifests/setup
kubectl create -f manifests

The Prometheus objects are all in the “monitoring” namespace. You can use kubectl get command to check their status:

kubectl get pod -n monitoring

Let’s also take a look at the Service objects:

kubectl get svc -n monitoring

The ports are set to “30090” and “30030” as we specified earlier.

After Prometheus is up and running, open a web browser and enter the node’s IP address followed by the port number “30090” to access the Prometheus web interface.

You can select any metric in the query field or use PromQL to edit expressions and generate visual charts. For example, the metric “node_memory_Active_bytes” shows the current memory usage.

Now let’s move on to Grafana. Access the port “30030” of the node to go to the Grafana login page. The default username and password are both “admin”.

Grafana comes with a variety of pre-built dashboards. You can browse and choose them in the “Dashboards - Browse” menu. For example, select the dashboard “Kubernetes / Compute Resources / Namespace (Pods)”.

  7. Dashboard

Now let’s deploy the Dashboard plugin in the Kubernetes cluster.

We will use version 2.6.0, which has only one YAML file. Let’s take a quick look:

vi dashboard.yaml

  • All objects belong to the “kubernetes-dashboard” namespace.
  • The Service object uses port 443, which maps to port 8443 of the Dashboard.
  • The Dashboard is deployed as a single instance using a Deployment, with port number 8443.
  • The container enables a liveness probe that checks its health over HTTPS.

You can deploy the Dashboard with a single command using kubectl apply:

kubectl apply -f dashboard.yaml
kubectl get pod -n kubernetes-dashboard

Next, let’s configure an Ingress entry for the Dashboard to access it via reverse proxy.

First, generate a self-signed certificate using the openssl tool. Then convert the generated certificate and private key to Secret objects. To save typing, the commands are written as a script file.

openssl req -x509 -days 365 -out k8s.test.crt -keyout k8s.test.key \
  -newkey rsa:2048 -nodes -sha256 \
    -subj '/CN=k8s.test' -extensions EXT -config <( \
       printf "[dn]\nCN=k8s.test\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:k8s.test\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")

export out="--dry-run=client -o yaml"
kubectl create secret tls dash-tls -n kubernetes-dashboard --cert=k8s.test.crt --key=k8s.test.key $out > cert.yml

The certificate has a validity period of 365 days, uses RSA 2048-bit private key, and SHA256 algorithm. The Secret object is named “dash-tls”.

Now, let’s define the Ingress Class and Ingress:

vi ingress.yml

Note that both are in the “kubernetes-dashboard” namespace. In the Ingress, the “annotations” field specifies that the backend target is an HTTPS service, and the “tls” field specifies the domain name “k8s.test” and the certificate Secret “dash-tls”.
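
A hedged sketch of the Ingress (the backend-HTTPS annotation below, “nginx.org/ssl-services”, is the one used by the NGINX official Ingress Controller that this course runs; if you use a different controller, substitute its equivalent, and the object name “dash-ing” is an assumption):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dash-ing                   # assumed name
  namespace: kubernetes-dashboard
  annotations:
    nginx.org/ssl-services: "kubernetes-dashboard"   # the backend speaks HTTPS
spec:
  ingressClassName: dash-ink
  tls:
  - hosts:
    - k8s.test
    secretName: dash-tls           # the certificate Secret created above
  rules:
  - host: k8s.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443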

Next, define the Ingress Controller. The image used is “nginx-ingress:2.2-alpine”. Make sure to set the Ingress Class in the “args” field to “dash-ink”, and also change the Service object to NodePort with port number 30443.

Finally, let’s create a user, admin-user, for the Dashboard:

vi admin.yml
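
The exact file is not shown here, but a common minimal definition, consistent with the “admin-user-token” Secret we read later, is a ServiceAccount bound to the cluster-admin role (a sketch):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin              # grants full admin rights to the Dashboard user
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard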

With all the YAML files prepared, let’s create the objects one by one using kubectl apply:

kubectl apply -f cert.yml
kubectl apply -f ingress.yml
kubectl apply -f kic.yml
kubectl apply -f admin.yml

Before accessing the Dashboard in the browser, we need to get the user’s token, which is stored as a Secret:

kubectl get secret -n kubernetes-dashboard
kubectl describe secrets -n kubernetes-dashboard admin-user-token-xxxx

Copy this token, make sure the domain name “k8s.test” can be resolved, and enter the URL “https://k8s.test:30443” in the browser to log in to the Dashboard.

Homework #

I wonder whether today’s reference video has helped you solve some practical problems. If you have completed all the exercises successfully, feel free to share your experience and new ideas in the comments section. If you ran into any difficulties, describe them clearly and post them there, and let’s discuss them together.