
22 Practical Exercises with Kubernetes 2 #

Hello, I am Chrono.

Our “Intermediate Series” is coming to an end today. Thank you for your perseverance in learning during this period.

As a closing course for the “Intermediate Series,” as usual, I will provide a comprehensive review and summary of the content we have learned so far to connect the knowledge points and enhance your understanding of them.

First, let’s organize the key points of Kubernetes knowledge covered in the “Intermediate Series,” and then we will demonstrate the practical exercise of building a WordPress website. This time, we will make progress compared to the previous two exercises; we won’t use Docker or bare Pods, but instead, we will utilize the objects we recently learned, such as Deployment, Service, Ingress, and more.

Review of Kubernetes Key Points #

Kubernetes is the operating system of the cloud-native era. It can manage clusters consisting of a large number of nodes, pooling computing resources and automatically scheduling and operating various forms of applications.

Building a multi-node Kubernetes cluster is quite challenging. Fortunately, tools like kubeadm have emerged in the community, which allow for a “one-click operation” to build a production-level cluster from scratch using commands such as kubeadm init and kubeadm join (Lecture 17).

kubeadm encapsulates Kubernetes components using container technology. As long as a container runtime, such as Docker or containerd, is installed on the node, it can automatically pull images from the Internet and run components in containers, making it simple and convenient.

In this Kubernetes cluster, which is much closer to an actual production environment, we learned about the Deployment, DaemonSet, Service, Ingress, and Ingress Controller API objects (Lecture 18).

Deployment is an object used to manage Pods. It represents the most common form of online service: it deploys multiple instances of an application in the cluster and can effortlessly scale the number of instances up or down to cope with traffic pressure.

There are two key fields in the definition of Deployment. One is replicas, which specifies the number of instances. The other is selector, which is used to “select” Pods managed by the Deployment. This flexible association mechanism achieves loose coupling between API objects.
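To make the coupling concrete, here is a minimal Deployment sketch (the name ngx-dep and its label are illustrative): selector.matchLabels must match the labels declared in the Pod template, otherwise the Deployment manages nothing.

```yaml
# minimal Deployment sketch; "ngx-dep" and its label are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 2              # number of Pod instances
  selector:
    matchLabels:
      app: ngx-dep         # must match the labels below ...
  template:
    metadata:
      labels:
        app: ngx-dep       # ... or the Deployment manages nothing
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
```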

DaemonSet is another way to deploy online businesses. It is similar to Deployment, but runs a Pod instance on each node in the cluster, similar to a “daemon process” in a Linux system. It is suitable for applications such as logging and monitoring.

The key to DaemonSet’s ability to place Pods on any node is the pair of concepts “taint” and “toleration”. Nodes carry various “taints”, and Pods can declare “tolerations” to ignore them, which lets you adjust how Pods are distributed across the cluster.
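For example, control-plane nodes normally carry a taint that keeps ordinary Pods away; a DaemonSet’s Pod template can ignore it with a fragment like this (a sketch; the taint key shown is the standard control-plane taint in recent Kubernetes versions):

```yaml
# Pod template fragment: tolerate the control-plane taint so the
# DaemonSet's Pods can also be scheduled onto control-plane nodes
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```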

Pods deployed by Deployment and DaemonSet are in a state of “dynamic balance” in the cluster: the total number stays constant, but individual Pods may be destroyed and re-created at any time, so their IP addresses keep changing. This poses a challenge for applications built on a microservice architecture.

Service is an abstraction of Pod IP addresses. It possesses a fixed IP address and uses iptables rules to load balance traffic to the Pods behind it. The kube-proxy component on the node maintains the status of the proxied Pods in real-time, ensuring that Service only forwards traffic to healthy Pods.

Service also supports domain names through the DNS plugin, so clients no longer need to be concerned with the specifics of individual Pods; they can access the services provided by Pods through this stable intermediate layer. (For example, a Service named my-svc in the default namespace can be reached inside the cluster as my-svc, my-svc.default, or the full name my-svc.default.svc.cluster.local.)

Service provides load balancing at layer four. However, most applications nowadays use the HTTP/HTTPS protocols, so to achieve layer-seven load balancing, the Ingress object is needed (Lecture 21).

Ingress defines routing rules based on the HTTP protocol. However, to make the rules take effect, coordination between an Ingress Controller and Ingress Class is required.

  • The Ingress Controller is the actual entrance of the cluster, responsible for scheduling and distributing traffic based on Ingress rules. It can also act as a reverse proxy, providing more features such as security protection and TLS offloading.
  • The Ingress Class manages the association between Ingress and Ingress Controller, making it easier to group routing rules and reduce maintenance costs.

However, the Ingress Controller itself is also a Pod, so exposing it outside the cluster still depends on Service. Service supports NodePort, LoadBalancer, and other types, but NodePort has a limited port range and LoadBalancer depends on a cloud provider; neither is very flexible.

The compromise is to expose the Ingress Controller with a small number of NodePort Services, let Ingress route traffic to the internal services, and bring in outside traffic with a reverse proxy or LoadBalancer.

Basic Architecture of a WordPress Website #

Having briefly reviewed these Kubernetes API objects, let’s now use them to build a WordPress website and deepen our understanding through practice.

Now that we have mastered concepts like Deployment, Service, and Ingress on top of Pods, the website naturally undergoes some changes. The architecture diagram is shown below:

Image

This deployment differs somewhat from the earlier Docker and minikube versions. The key point is that we have moved entirely off plain Docker and run all applications in the Kubernetes cluster. We no longer deploy bare Pods; using Deployment instead greatly improves stability.

The original role of Nginx was to act as a reverse proxy; in Kubernetes it is upgraded to an Ingress Controller with the same function. WordPress, which originally had only one instance, now runs two (you can scale horizontally as you like), greatly improving availability. MariaDB stays at a single instance for now to ensure data consistency.

Also, since Kubernetes has a built-in service discovery mechanism, Service, we no longer need to manually check the IP addresses of Pods. We just need to define Service objects for them and use domain names to access services like MariaDB and WordPress.

There are two ways for the website to provide services to the outside world.

One way is to expose the WordPress Service object directly to the outside through NodePort, which is convenient for testing. Another way is to add the “hostNetwork” attribute to the Nginx Ingress Controller, directly using the port number on the node, similar to Docker’s host network mode. The advantage is that it can bypass the port range limit of NodePort.

Now let’s gradually build the new version of the WordPress website according to this basic architecture and write YAML declarations.

Here’s a little trick. When you’re actually working, remember to make good use of kubectl create and kubectl expose to create template files, which saves time and avoids low-level formatting errors.
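In this column the trick is usually wrapped in a $out shell variable; here is a minimal sketch (the object names are just examples, and the kubectl calls are guarded so the snippet is safe to run even where kubectl is absent):

```shell
# shorthand: a client-side dry run that prints YAML instead of creating objects
export out="--dry-run=client -o yaml"

# generate template files; --dry-run=client needs kubectl but does not
# contact the cluster at all
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deploy wp-dep --image=wordpress:5 $out > wp-dep.yml
  kubectl create service nodeport wp-svc --tcp=80:80 $out > wp-svc.yml
fi
```

You can then edit the generated wp-dep.yml and wp-svc.yml instead of writing the boilerplate by hand.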

1. Deploying MariaDB for the WordPress Website #

First, let’s deploy MariaDB, similar to what we did in Lecture 15.

Use a ConfigMap to define the environment variables for the database, including DATABASE, USER, PASSWORD, and ROOT_PASSWORD:

apiVersion: v1
kind: ConfigMap
metadata:
  name: maria-cm

data:
  DATABASE: 'db'
  USER: 'wp'
  PASSWORD: '123'
  ROOT_PASSWORD: '123'

Then we turn MariaDB from a bare Pod into a Deployment: set replicas to 1, and leave the Pod part inside the template unchanged. We still inject the configuration as environment variables into the Pod with envFrom. It’s like putting the Pod into a Deployment “shell”:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: maria-dep
  name: maria-dep

spec:
  replicas: 1
  selector:
    matchLabels:
      app: maria-dep

  template:
    metadata:
      labels:
        app: maria-dep
    spec:
      containers:
      - image: mariadb:10
        name: mariadb
        ports:
        - containerPort: 3306

        envFrom:
        - prefix: 'MARIADB_'
          configMapRef:
            name: maria-cm

We also need to define a Service object for MariaDB, map port 3306, so that other applications no longer need to worry about IP addresses and can directly access the database service using the name of the Service object:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: maria-dep
  name: maria-svc

spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: maria-dep

Because these three objects are all related to the database, they can be written in one YAML file, with “---” separating the objects. A single kubectl apply then creates them all at once:

kubectl apply -f wp-maria.yml

After executing the command, you should use kubectl get to check whether the objects have been created successfully and are running properly:

Image

2. Deploying WordPress for the WordPress Website #

The second step is to deploy the WordPress application.

Since we just created the Service for MariaDB, the “HOST” in the WordPress ConfigMap should no longer be an IP address but a DNS domain name: the name of the Service itself, maria-svc. This point needs special attention:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wp-cm

data:
  HOST: 'maria-svc'
  USER: 'wp'
  PASSWORD: '123'
  NAME: 'db'

The Deployment for WordPress is written the same way as MariaDB’s: put the Pod into a Deployment “shell”, set replicas to 2, and configure the environment variables with “envFrom”:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wp-dep
  name: wp-dep

spec:
  replicas: 2
  selector:
    matchLabels:
      app: wp-dep

  template:
    metadata:
      labels:
        app: wp-dep
    spec:
      containers:
      - image: wordpress:5
        name: wordpress
        ports:
        - containerPort: 80

        envFrom:
        - prefix: 'WORDPRESS_DB_'
          configMapRef:
            name: wp-cm

Next, we still need to create a Service object for WordPress. Here I use the “NodePort” type and manually specify the port number “30088” (which must be between 30000 and 32767):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: wp-dep
  name: wp-svc

spec:
  ports:
  - name: http80
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30088

  selector:
    app: wp-dep
  type: NodePort

Now let’s deploy WordPress using kubectl apply:

kubectl apply -f wp-dep.yml

The status of these objects can be seen in the screenshot below:

Image

Since the Service object for WordPress is of type NodePort, we can access the WordPress service on each node in the cluster.

For example, if the IP address of a node is “192.168.10.210”, you can enter “http://192.168.10.210:30088” in the browser’s address bar, where “30088” is the node port number specified in the Service. Then you will be able to see the WordPress installation page.
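From the command line, the same check can be done with curl; here is a small sketch using the example node address (the live request is commented out because it only works from a host that can reach the cluster):

```shell
# compose the NodePort URL: any node's IP works, and the port is the
# nodePort fixed in the wp-svc Service object
node_ip="192.168.10.210"   # example node address from the text
node_port=30088            # nodePort specified in wp-svc
url="http://${node_ip}:${node_port}"
echo "$url"

# from a host with access to the cluster network, run:
#   curl -I "$url"
```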

Image

3. Deploying the Nginx Ingress Controller for the WordPress Website #

Now that MariaDB and WordPress have been successfully deployed, the third step is to deploy the Nginx Ingress Controller.

First, we need to define the Ingress Class, named “wp-ink”, which is very simple:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: wp-ink

spec:
  controller: nginx.org/ingress-controller

Then, use the kubectl create command to generate a template file for the Ingress, specifying the domain name as “wp.test”, the backend Service as “wp-svc:80”, and the Ingress Class as the previously defined “wp-ink” (the $out variable holds the shorthand --dry-run=client -o yaml):

kubectl create ing wp-ing --rule="wp.test/=wp-svc:80" --class=wp-ink $out

The resulting Ingress YAML looks like this, with the path type still set to “Prefix”:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wp-ing

spec:
  ingressClassName: wp-ink

  rules:
  - host: wp.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wp-svc
            port:
              number: 80

Next comes the most important object, the Ingress Controller. It is again adapted from the Nginx project’s example YAML: we change the name, the labels, and the Ingress Class in its startup parameters.

As I mentioned when discussing the basic architecture, this Ingress Controller does not use a Service. Instead, a special field hostNetwork is added to its Pod to allow the Pod to use the host network, which is another form of NodePort:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp-kic-dep
  namespace: nginx-ingress

spec:
  replicas: 1
  selector:
    matchLabels:
      app: wp-kic-dep

  template:
    metadata:
      labels:
        app: wp-kic-dep

    spec:
      serviceAccountName: nginx-ingress

      # use host network
      hostNetwork: true

      containers:
      ...

With the Ingress resources ready, we create these objects:

kubectl apply -f wp-ing.yml -f wp-kic.yml

Image

Now all the applications have been deployed, and you can access the website from outside the cluster to verify the result.

However, please note that Ingress uses HTTP routing rules, so accessing it by IP address will not match any rule. A host outside the cluster must therefore be able to resolve our “wp.test” domain name, that is, map “wp.test” to the node where the Ingress Controller is running.

If you are using macOS or Linux, modify /etc/hosts; if you are using Windows, modify C:\Windows\System32\Drivers\etc\hosts. Add a resolution rule:

cat /etc/hosts
192.168.10.210 wp.test

With the domain name resolution, you don’t need to use the IP address in the browser, simply use the domain name “wp.test” to access our WordPress website through the Ingress Controller:
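For a quick command-line check you can even skip the hosts file: curl’s --resolve option pins the name for a single request (a sketch; the IP is the example node address, and the live call is guarded behind an environment variable so the snippet runs safely anywhere):

```shell
# --resolve acts like a temporary hosts-file entry that only this one
# curl invocation sees: wp.test on port 80 maps to the node IP
mapping="wp.test:80:192.168.10.210"

# set WP_LIVE=1 on a host that can reach the cluster to try it
if [ -n "${WP_LIVE:-}" ] && command -v curl >/dev/null 2>&1; then
  curl --resolve "$mapping" -I http://wp.test/
fi
```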

Image

With this, our work on deploying the WordPress website on Kubernetes is now complete.

Summary #

In this lesson, we reviewed some key points from the “Intermediate” section. I have summarized them in a mind map for you to review and consolidate your learning outcomes after class.

Image

Today, we rebuilt the WordPress website in the Kubernetes cluster, and applied new objects such as Deployment, Service, and Ingress. We added three very important functions to the website: horizontal scalability, service discovery, and Layer 7 load balancing, which improved the stability and availability of the website. Basically, we solved the problems encountered in the “Beginner” section.

Although this website is still far from being truly practical, the framework is already well-developed. You can add other features on top of this foundation, such as creating certificate Secrets and making Ingress support HTTPS.

In addition, although the website’s services are now highly available, its data is not. For the MariaDB database, the Deployment can promptly restart the Pod after a failure, but the new Pod will not inherit the old Pod’s data; everything previously stored on the site would be completely lost, which is unacceptable.

Therefore, in the upcoming “Advanced” section, we will continue to learn about persistent storage objects like PersistentVolume, as well as stateful objects like StatefulSet, to further improve our website.

Homework #

Finally, it’s time for homework. There are two hands-on tasks:

  1. Can you deploy WordPress and Ingress Controller using DaemonSet?
  2. Can you create a Service object for Ingress Controller and expose it externally using NodePort?

Feel free to leave a comment and share your hands-on experience. If you find this article helpful, you can also share it with your friends to learn together. The next class will be a video lesson. See you in the next class!
