23 Video Tutorial: A Summary of Intermediate-Level Operations #

Hello, I’m Chrono.

Today we’re concluding our study of the “Intermediate Level” section. Over these lessons, we’ve used kubeadm to build a multi-node Kubernetes cluster that closely resembles a production environment, and we’ve learned about four important API objects: Deployment, DaemonSet, Service, and Ingress.

This video serves as a summary and review of the “Intermediate Level” section. I will demonstrate the operations covered in previous lessons so that you can follow them in context and learn more effectively.

First, I will show you the complete process of building a cluster using kubeadm. Then, I will demonstrate how to write YAML files and use them to create Deployment, DaemonSet, Service, and Ingress objects. Finally, we will use these objects to build a WordPress website.

Let’s begin with the video course.

Note: The video is a recording of my actual operations. Some package installations take a while, so there may be a few seconds of inactivity on screen. Please be patient when following the video or performing the operations yourself.


Part 1: Installing kubeadm #

We will start by installing the master node. Before proceeding, make sure you have already installed Docker (installation instructions can be found in Lesson 1).

First, we need to perform four preparatory steps: changing the hostname, modifying the Docker configuration, adjusting the network settings, and disabling swap.

To change the hostname, modify the /etc/hostname file and change it to “master”:

sudo vi /etc/hostname

Next, modify the Docker configuration file to change the cgroup driver to “systemd”, load the br_netfilter kernel module and adjust the related iptables settings, and finally update the “/etc/fstab” file to disable Linux swap.

These steps are already covered in Lesson 17. To simplify the process, I have created a script file that automates these operations. Take a look at it:

vi prepare.sh

The first part of the script modifies the Docker configuration and restarts Docker, the second part adjusts the iptables settings, and the third part disables Linux swap. Running this script completes the preparations for the installation.
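
If you want to reproduce it without the video, here is a minimal sketch of what such a prepare.sh might contain. It follows the standard kubeadm prerequisites; the exact contents of my script may differ:

#!/bin/bash
# prepare.sh (sketch): standard kubeadm prerequisites

# 1. Switch Docker's cgroup driver to systemd and restart it
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# 2. Load br_netfilter and let bridged traffic pass through iptables
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

# 3. Disable Linux swap, both now and after reboot
sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab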

Next, we need to download the kubeadm executable. Once again, I have created a convenient script file for this:

vi admin.sh

The basic process follows the official documentation of Kubernetes, but uses software mirrors from domestic sources.

Note: When using apt install, it is important to specify the version explicitly. In this case, we are using Kubernetes version 1.23.3. If no version is specified, the latest version (1.24 at the time of recording) will be installed instead.
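
For reference, the version-pinned installation step in admin.sh presumably looks something like this. It is a sketch based on the official apt instructions, and the “-00” package revision suffix is an assumption:

# Pin kubeadm, kubelet, and kubectl to 1.23.3 (package revision assumed)
sudo apt update
sudo apt install -y kubeadm=1.23.3-00 kubelet=1.23.3-00 kubectl=1.23.3-00

# Prevent accidental upgrades to 1.24+
sudo apt-mark hold kubeadm kubelet kubectl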

After the installation is complete, we can use kubeadm version and kubectl version --client to verify that the correct versions have been installed:

kubeadm version
kubectl version --client

As you can see, the installed version matches the one we specified (1.23.3).


Part 2: Installing Kubernetes #

With kubeadm installed, we can now proceed with the installation of Kubernetes. During the installation process, we need to pull images from gcr.io. I have already downloaded these images from a domestic mirror website. Let me show you the list of images:

docker images

These images include the core components of Kubernetes, such as etcd, apiserver, and controller-manager.

Now, let’s run kubeadm init to initialize the master node. Take a look at the script file:

vi master.sh

During initialization, I use the --pod-network-cidr parameter to set the IP address range for Pods to “10.10.0.0/16”, and I also specify Kubernetes version 1.23.3.

To ensure that kubectl works properly after the installation, the script copies the Kubernetes configuration file to the “.kube” directory of the current user.
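
For reference, a minimal sketch of what master.sh might contain follows. The advertise address 192.168.10.210 matches the master node used later in this lesson, but the exact flags in my script may differ:

#!/bin/bash
# master.sh (sketch): initialize the control plane

sudo kubeadm init \
    --pod-network-cidr=10.10.0.0/16 \
    --apiserver-advertise-address=192.168.10.210 \
    --kubernetes-version=v1.23.3

# Let kubectl find the cluster configuration
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config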

Now, let’s run this script.

After the master node is initialized, kubeadm will print some important information. The most important piece is the kubeadm join command for adding worker nodes to the cluster. Be sure to save this command, for example by copying it into a new file such as k.txt.
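
The join command has the following general shape; the token and certificate hash are generated during initialization, so the placeholders below must be replaced with your own values:

# Shape of the printed join command (token and hash are placeholders)
kubeadm join 192.168.10.210:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>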

Now, let’s use kubectl version and kubectl get node to check the Kubernetes version and the status of the cluster nodes:

kubectl version
kubectl get node

The status of the master node will be “NotReady”. Next, we need to install the Flannel network plugin.

Installing Flannel is simple. Just remember to edit the “net-conf.json” section inside its deployment YAML so that the network range matches the one specified in kubeadm init:

vi kube-flannel.yml
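
Inside this file, the part to change sits in a ConfigMap. It should look roughly like this, with “Network” set to the same CIDR we passed to --pod-network-cidr (vxlan is Flannel's default backend):

net-conf.json: |
  {
    "Network": "10.10.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }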

Now, let’s use kubectl apply to install the Flannel network:

kubectl apply -f kube-flannel.yml

Wait a moment, then execute kubectl get node to check the node status:

kubectl get node

You should see that the status of the master node is now “Ready”, indicating that the node’s network is working properly.


Part 3: Deploying the Kubernetes Cluster #

The installation process for worker nodes is similar to that of the master node. We need to change the hostname, then run prepare.sh to modify the Docker configuration, adjust the iptables settings, and disable Linux swap. After that, we can download the kubeadm executable and the Kubernetes images.

These steps are identical to those on the master node, so I won’t show the script files again; let’s just run them. Since this is a worker node, we don’t execute kubeadm init. Instead, we execute kubeadm join, the command we copied during the installation of the master node. It automatically connects to the master node, pulls the images, installs the network plugin, and finally joins the node to the cluster.

After the worker node is installed, let’s check the status of the nodes by executing kubectl get node; we will see that both nodes are “Ready”.

Now let’s use kubectl run to run Nginx for testing:

kubectl run ngx --image=nginx:alpine
kubectl get pod -o wide

We will see that the Pod is running on the worker node, indicating that our multi-node Kubernetes cluster deployment is successful.


Part 4: Using Deployment #

Next, let’s take a look at how to use Deployment.

First, use kubectl api-resources to view the basic information of Deployment:

kubectl api-resources | grep deploy

We can see that the shorthand for Deployment is “deploy”, its apiVersion is “apps/v1”, and the kind is “Deployment”.

Then we execute kubectl create to have kubectl generate the Deployment template file for us.

First, define an environment variable “out”:

export out="--dry-run=client -o yaml"

Then generate a template for an object named “ngx-dep” using the image “nginx:alpine”:

kubectl create deploy ngx-dep --image=nginx:alpine $out

We save this template into a file called deploy.yml:

kubectl create deploy ngx-dep --image=nginx:alpine $out > deploy.yml

Here, we can delete some unnecessary fields to make the YAML cleaner, and then change replicas to 2, meaning two Nginx Pods will be started.
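
After trimming, the file should look roughly like this. It is a sketch assuming the default app: ngx-dep label that kubectl create generates:

# deploy.yml (sketch after trimming the generated template)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ngx-dep
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      containers:
      - image: nginx:alpine
        name: nginx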

After the Deployment YAML is written, we can use kubectl apply to create the object:

kubectl apply -f deploy.yml

Use the kubectl get command to check the status of the Deployment:

kubectl get deploy
kubectl get pod

Finally, let’s test the scaling feature of the Deployment. Use the kubectl scale command to change the number of Pods to 5:

kubectl scale --replicas=5 deploy ngx-dep

When we use the kubectl get command again, we will see that the Pods have successfully changed to 5 replicas:

kubectl get pod

Finally, delete this Deployment:

kubectl delete deploy ngx-dep

Part 5: Using DaemonSet #

After learning about Deployment, let me demonstrate the usage of DaemonSet.

kubectl create cannot generate a DaemonSet template directly, but a DaemonSet’s overall structure is very similar to a Deployment’s, so we can generate a Deployment template first and then modify a few fields.

Here, I use the common Linux tool “sed” to replace “Deployment” with “DaemonSet” in the template and delete the replicas field. This automatically produces a template file for the DaemonSet:

kubectl create deploy redis-ds --image=redis:5-alpine $out \
  | sed 's/Deployment/DaemonSet/g' - \
  | sed -e '/replicas/d' -

Since this template file is derived from a Deployment, it lacks the tolerations field, so its Pods cannot run on the master node; we need to add that field manually.

The following is the complete DaemonSet YAML description file that has been modified:

vi ds.yml

Pay attention to the tolerations field inside it: it tolerates the node-role.kubernetes.io/master:NoSchedule taint on the nodes, which means the Pods can run on the master node.
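
For reference, such a toleration typically looks like this (a sketch using the standard Kubernetes toleration syntax for the taint named above):

# Allows scheduling onto nodes carrying the master taint
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule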

Now, let’s deploy this DaemonSet with “tolerations”:

kubectl apply -f ds.yml

Use kubectl get to check the status of the object:

kubectl get ds
kubectl get pod -o wide

We can see that this Redis DaemonSet is running on both the master and worker nodes. To delete the DaemonSet, run the following command:

kubectl delete -f ds.yml

Part 6: Using Service #

Now let’s take a look at the Service object, which is the load balancing object in Kubernetes.

Because a Service provides load balancing for Pods managed by a Deployment or similar object, we need to create a Deployment before creating the Service. Run the following command to create it:

kubectl apply -f deploy.yml

This Deployment manages two Nginx Pods. To see the Pods, run the following command:

kubectl get pod -o wide

Next, we’ll create the Service template file using the kubectl expose command:

kubectl expose deploy ngx-dep --port=80 --target-port=80 $out
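
The generated template, trimmed and renamed to ngx-svc to match the rest of this lesson, should look roughly like this (a sketch; the selector assumes the app: ngx-dep label generated earlier):

# svc.yml (sketch after renaming and trimming)
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80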

After making these changes, the Service file is svc.yml. Use the kubectl apply command to create the Service object:

kubectl apply -f svc.yml

To list the Service objects and see their virtual IP addresses, run:

kubectl get svc

To see which backend Pods the Service is proxying to, use the kubectl describe command:

kubectl describe svc ngx-svc

Compare these endpoints with the Pod IP addresses to verify that the Service is correctly proxying the Nginx Pods:

kubectl get pod -o wide

Now, let’s use the kubectl exec command to enter a Pod and test the domain functionality of the Service:

kubectl exec -it ngx-dep-6796688696-4h6lb -- sh

Using curl with the domain name “ngx-svc” (which is the name of the Service object), run:

curl ngx-svc

After running it multiple times, you will see that the Service object implements load balancing by distributing the traffic to different Pods.

We can also try different domain name variants for the Service, such as adding the namespace:

curl ngx-svc.default
curl ngx-svc.default.svc.cluster.local

Finally, let’s look at how to expose the Service using the NodePort method. To see the Service object, run:

kubectl get svc

In the PORT(S) column, you will see that Kubernetes has assigned a random node port, 31980. By accessing any node in the cluster at this port, you can reach the Service object and the Pods behind it.
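
This assignment happens because svc.yml declares the NodePort type. If your file does not, add it under spec (a minimal sketch; the assigned port will vary):

spec:
  # Expose the Service on a random port (30000-32767) of every node
  type: NodePort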

Let’s give it a try. Note that 210 is the master node and 220 is the worker node:

curl 192.168.10.210:31980
curl 192.168.10.220:31980

Finally, delete the Deployment and Service objects:

kubectl delete -f deploy.yml
kubectl delete -f svc.yml

Part 7: Using Ingress #

After learning about Services, let’s now take a look at the Ingress object, which manages incoming cluster traffic.

We’ll use the Ingress Controller developed by Nginx. To set it up, we need to create a namespace, RBAC, and other relevant resources as specified in its documentation. Here, I’ll use a simple script to accomplish that:

cat setup.sh
./setup.sh

You can verify that a new namespace, nginx-ingress, has been created by running the command kubectl get ns.

To test and validate the usage of Ingress and the Ingress controller, we still need to create the Deployment and Service objects:

kubectl apply -f deploy.yml
kubectl apply -f svc.yml

Let’s take a look at the definition of the Ingress:

vi ingress.yml

This YAML file contains two API objects. The first is an Ingress Class named “ngx-ing”; note that in its spec, the controller field must be set to the Nginx Ingress Controller.

The second object is the Ingress rule object. I added the annotation nginx.org/lb-method to select the Round-Robin load balancing algorithm. Then there is the crucial ingressClassName field, which associates the Ingress with the Ingress Class.

The “rules” section defines the specific routing rules, which can be quite complex: each rule specifies the host, the path, and the backend Service to forward to. It is recommended to use kubectl create to generate these rules automatically and avoid mistakes.
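
Putting the pieces together, ingress.yml presumably looks roughly like this. The controller value and the routing rule below are assumptions consistent with the Nginx Ingress Controller and the ngx.test domain used later in this lesson:

# ingress.yml (sketch)
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ngx-ing
spec:
  controller: nginx.org/ingress-controller

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    nginx.org/lb-method: round_robin
spec:
  ingressClassName: ngx-ing
  rules:
  - host: ngx.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ngx-svc
            port:
              number: 80

A rule like this can also be generated with kubectl create ing ngx-ing --rule="ngx.test/=ngx-svc:80" --class=ngx-ing $out.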

Let’s next take a look at the definition of the Ingress Controller in the KIC YAML file:

vi kic.yml

The Ingress Controller is actually modified from the official Nginx example file. Therefore, we only need to pay attention to a few areas.

The first is the image, which I changed to a more streamlined Alpine version. The second is the startup arguments in “args”, which must include -ingress-class to associate the controller with the Ingress Class object we created earlier. Otherwise, the Ingress Controller won’t be able to find the Ingress routing rules.

There are a few more parameters following it, such as -health-status and -ready-status. You can refer to the official documentation to understand their functions.
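
The relevant fragment of kic.yml might look like this. This is a sketch: the flag names come from the controller's documentation, and the image tag is an assumption:

# kic.yml (sketch of the container spec)
containers:
- name: nginx-ingress
  image: nginx/nginx-ingress:2.2-alpine   # assumed Alpine tag
  args:
    - -ingress-class=ngx-ing   # must match the Ingress Class name
    - -health-status           # expose a health check endpoint
    - -ready-status            # expose a readiness endpoint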

Now, let’s apply these two YAML files to create the Ingress objects:

kubectl apply -f ingress.yml
kubectl apply -f kic.yml

Use kubectl get to check the status of these objects one by one:

kubectl get ingressclass
kubectl get ing
kubectl describe ing ngx-ing

kubectl get deploy -n nginx-ingress
kubectl get pod -n nginx-ingress

Make sure they are all working properly. Let’s perform a test by mapping the local port 8080 to port 80 of the Ingress Controller Pod:

kubectl port-forward -n nginx-ingress ngx-kic-dep-8859b7b86-cplgp 8080:80 &

Since the routing rule in the Ingress matches the domain ngx.test, we need to use curl’s --resolve parameter to force the name to resolve to 127.0.0.1:

curl --resolve ngx.test:8080:127.0.0.1 http://ngx.test:8080

Execute it multiple times, and you will notice that the Nginx Ingress Controller forwards the request to different backend Pods based on the domain routing rules.

Finally, let’s delete the Deployment, Service, and Ingress objects that we created earlier:

kubectl delete -f deploy.yml
kubectl delete -f svc.yml
kubectl delete -f ingress.yml
kubectl delete -f kic.yml

Part 8: Setting Up a WordPress Website #

Let’s now set up a WordPress website to practice working with the Deployment, Service, and Ingress objects together.

The first step is to deploy MariaDB:

vi wp-maria.yml

The ConfigMap is the same as before, with the environment variables “DATABASE”, “USER”, and “PASSWORD”. The deployment method has changed to a Deployment; for simplicity, only one instance is used. We also define a Service object for it, allowing us to access the database by domain name instead of IP address.
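
The Service part of wp-maria.yml might look roughly like this; the name maria-svc matches the domain used in the next step, while the app label is an assumption:

# Service for MariaDB (sketch; label assumed)
apiVersion: v1
kind: Service
metadata:
  name: maria-svc
spec:
  selector:
    app: maria-dep
  ports:
  - port: 3306        # standard MariaDB/MySQL port
    protocol: TCP
    targetPort: 3306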

The second step is to deploy the WordPress application:

vi wp-dep.yml

Note that in the ConfigMap we no longer use a fixed IP address; instead, we use the domain name provided by the Service, “maria-svc”. In the Deployment, we set the number of WordPress instances to 2 for redundancy and improved availability. We also define a Service object for it in NodePort mode, using port 30088.
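
The WordPress Service might therefore look like this (a sketch; the name wp-svc and the app label are assumptions, while the node port comes from this lesson):

# Service for WordPress (sketch; names assumed)
apiVersion: v1
kind: Service
metadata:
  name: wp-svc
spec:
  type: NodePort
  selector:
    app: wp-dep
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30088   # fixed node port chosen in this lesson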

The third step is to deploy the Ingress:

vi wp-ing.yml

The definition of the Ingress is similar to the previous one, but the Ingress Class name is changed to “wp-ing”, and the host field in the routing rule is changed to “wp.test”.

There are not many changes in the Ingress Controller:

vi wp-kic.yml

The key is still the “args” parameter, which must match the Ingress Class, i.e., “wp-ing”. In addition, the field “hostNetwork: true” is added, allowing the Pod to use the host’s network directly.
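
The corresponding fragment of wp-kic.yml might look like this (a sketch, reusing the assumed image tag from earlier):

# wp-kic.yml (sketch of the Pod spec)
spec:
  hostNetwork: true   # the Pod shares the node's network namespace
  containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:2.2-alpine   # assumed Alpine tag
    args:
      - -ingress-class=wp-ing   # must match the wp-ing Ingress Class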

After reading through these YAML files, let’s use kubectl apply to create the objects:

kubectl apply -f wp-maria.yml
kubectl apply -f wp-dep.yml
kubectl apply -f wp-ing.yml
kubectl apply -f wp-kic.yml

After creation, let’s use kubectl get to check the status of the objects:

kubectl get deploy
kubectl get svc
kubectl get pod -n nginx-ingress

Now, let’s go outside the cluster. Assuming you have modified your local hosts file so that the domain “wp.test” resolves to one of the Kubernetes nodes, you can enter “http://wp.test” directly in a browser to reach the Nginx Ingress Controller and, through it, the WordPress website.
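
The hosts entry is a single line; the address below assumes the master node used earlier in this lesson:

# /etc/hosts (address of the master node assumed)
192.168.10.210  wp.test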

Homework #

If you encounter any difficulties during these operations, feel free to leave a message in the comments section. Remember to describe your problem clearly so that other students and I can discuss it in detail.

I hope you have gained a lot from this part of the course. The next section is the final, advanced one. See you in the next class.