
06 Practice Chapter: Mastery of Core Kubernetes (K8s) Practical Knowledge #

In the previous chapters we covered the main aspects of Kubernetes: its core components, deploying applications on Kubernetes, implementing DevOps on Kubernetes, and microservice scenarios. Since that knowledge can feel abstract when you first encounter it, the best way to consolidate it is hands-on practice with the core technical skills, which is what this chapter walks through.

Many readers hit obstacles as soon as they try to install a highly available Kubernetes cluster. Although plenty of reference material is available online, there is still no officially recommended project that is both easy to use and continuously maintained. Painful as that experience can be, the knowledge needed to install a cluster is not what the Kubernetes administrator certification from the CNCF actually examines; what it assesses is proficiency with the kubectl command-line tool. This misconception leads many beginners to spend their energy on the less important points. After all, what matters most in our business scenarios is knowing how to use Kubernetes well, not exploring its underlying technical implementation first.

Remember: the majority of our energy should go into understanding how to use Kubernetes, because that is what delivers immediate gains. The broad body of knowledge about the underlying implementation, the remaining roughly 20 percent, is something to explore and learn gradually. The two complement each other, but without a solid grasp of how Kubernetes is used it is hard to appreciate the benefits its underlying technologies bring.

Exercise 1: Running Pod Containers Using the Command Line #

Use the command line tool kubectl to execute the following command:

❯ kubectl run --image=nginx nginx-app
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-app created

After the command succeeds, check whether the Pod has started by running:

❯ kubectl get po -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE                                                 NOMINATED NODE   READINESS GATES
nginx-app-d65f68dd5-rv4wz   1/1     Running   0          3m41s   10.4.47.234   gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8   <none>           <none>

To better understand how the Pod is brought up, we usually inspect it with the following command:

❯ kubectl describe po nginx-app-d65f68dd5
Name:               nginx-app-d65f68dd5-rv4wz
Namespace:          xiaods
Priority:           0    
PriorityClassName:  <none>
Node:               gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8/10.128.0.133
Start Time:         Sat, 02 May 2020 13:28:15 +0800     
Labels:             pod-template-hash=d65f68dd5
                    run=nginx-app
Annotations:        cni.projectcalico.org/podIP: 10.4.47.234/32
                    container.apparmor.security.beta.kubernetes.io/nginx-app: runtime/default
                    kubernetes.io/egress-bandwidth: 5M          
                    kubernetes.io/ingress-bandwidth: 5M           
                    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-app; cpu, memory limit for container nginx-app
                    kubernetes.io/psp: cloud-okteto-enterprise-restrictive
                    seccomp.security.alpha.kubernetes.io/pod: runtime/default                                                                                                     
Status:             Running
IP:                 10.4.47.234
Controlled By:      ReplicaSet/nginx-app-d65f68dd5
......Some code omitted......

Events:
  Type    Reason     Age    From                                                         Message
  ----    ------     ----   ----                                                         -------
  Normal  Scheduled  5m19s  default-scheduler                                            Successfully assigned xiaods/nginx-app-d65f68dd5-rv4wz to gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8
  Normal  Pulling    5m17s  kubelet, gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8  Pulling image "nginx"
  Normal  Pulled     5m17s  kubelet, gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8  Successfully pulled image "nginx"
  Normal  Created    5m17s  kubelet, gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8  Created container nginx-app
  Normal  Started    5m16s  kubelet, gke-us-central1-cloud-okteto-com-pro-a09dced8-jxp8  Started container nginx-app

In the output above, the Events section is the most valuable information for quickly judging a Pod's state: it lets you trace the Pod's lifecycle from scheduling through image pull to container start.

Author's recommendation: kubectl has a large number of subcommands and options. The Kubectl Cheat Sheet on the Kubernetes documentation site is a good reference to work through when building up the command-line skills you need.
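
For reference, here are a few invocations from that cheat sheet that come up constantly when inspecting and adjusting a running workload (a quick sketch; the pod name is the one created above and will differ in your cluster):

kubectl get deploy,svc,po -o wide                    # list deployments, services and pods with node/IP details
kubectl logs -f nginx-app-d65f68dd5-rv4wz            # stream the pod's logs
kubectl exec -it nginx-app-d65f68dd5-rv4wz -- sh     # open a shell inside the container
kubectl scale deployment nginx-app --replicas=3      # scale the deployment out to three replicas
kubectl delete deployment nginx-app                  # clean up when you are done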

Exercise 2: Assigning Independent Resources to Each Member of the Team #

After a Kubernetes cluster is installed, the system provides us with a kubeconfig file. By default it contains a cluster-admin account with super-administrator privileges for managing all cluster resources globally. This account authenticates to the cluster directly with a client certificate, so unlike a traditional business system we cannot simply change a password to contain the security risk of the file being shared around. In practice, therefore, we should give each developer an independent account and a dedicated namespace so that resource usage is isolated. A kubeconfig file is typically structured as follows:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
  name: development
- cluster:
  name: scratch

users:
- name: developer
- name: experimenter

contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch

To add a user to the configuration, use the following command:

kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file

If you want to delete a user, you can run:

kubectl --kubeconfig=config-demo config unset users.<name>
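
The clusters and contexts in the skeleton above are populated the same way. A minimal sketch (the names match the skeleton, while the server address, certificate files, and namespace are placeholders):

# Register a cluster entry and a context that ties user, cluster, and namespace together
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer

# Switch to the new context
kubectl config --kubeconfig=config-demo use-context dev-frontend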

Of course, we still have to create the corresponding RBAC configuration for each such account in the cluster, which is cumbersome and error-prone to do by hand. Given that, is there a ready-made tool that handles multi-user configuration for us? There is: the Kiosk multi-tenancy management suite:

# Install Kiosk
# Install kiosk with helm v3
kubectl create namespace kiosk
helm install kiosk --repo https://charts.devspace.sh/ kiosk --namespace kiosk --atomic

kubectl get pod -n kiosk

NAME                     READY   STATUS    RESTARTS   AGE
kiosk-58887d6cf6-nm4qc   2/2     Running   0          1h

# Configure user accounts and permissions
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account.yaml

# View your own accounts as a regular account user
kubectl get accounts --as=john

# View the details of one of your accounts as a regular account user
kubectl get account johns-account -o yaml --as=john
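
For reference, the account.yaml applied above defines an Account object that binds a subject such as john to a kiosk account. Roughly, and abridged from the kiosk examples (field details may vary between kiosk versions), it looks like this:

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  subjects:
  - kind: User                              # the cluster user this account belongs to
    name: john
    apiGroup: rbac.authorization.k8s.io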

Note that Kubernetes supports several kinds of users; the common ones are X.509 certificate users and service account tokens. The mainstream approach to user management today leans towards ServiceAccounts, whose tokens can be generated dynamically:

USER_NAME="john"
kubectl -n kiosk create serviceaccount $USER_NAME

# Configure the kubeconfig file for the john user
KUBECONFIG_PATH="$HOME/.kube/config-kiosk"

kubectl config view --minify --raw >$KUBECONFIG_PATH
export KUBECONFIG=$KUBECONFIG_PATH

CURRENT_CONTEXT=$(kubectl config current-context)
kubectl config rename-context $CURRENT_CONTEXT kiosk-admin

CLUSTER_NAME=$(kubectl config view -o jsonpath="{.clusters[].name}")
ADMIN_USER=$(kubectl config view -o jsonpath="{.users[].name}")

SA_NAME=$(kubectl -n kiosk get serviceaccount $USER_NAME -o jsonpath="{.secrets[0].name}")
SA_TOKEN=$(kubectl -n kiosk get secret $SA_NAME -o jsonpath="{.data.token}" | base64 -d)

kubectl config set-credentials $USER_NAME --token=$SA_TOKEN
kubectl config set-context kiosk-user --cluster=$CLUSTER_NAME --user=$USER_NAME
kubectl config use-context kiosk-user

# Optional: delete admin context and user
kubectl config unset contexts.kiosk-admin
kubectl config unset users.$ADMIN_USER

export KUBECONFIG=""

# To work as the restricted user again later, point KUBECONFIG back at the file:
KUBECONFIG_PATH="$HOME/.kube/config-kiosk"

export KUBECONFIG=$KUBECONFIG_PATH

kubectl ...
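
Once you are on the kiosk-user context, a quick way to confirm that the restricted account really is isolated is to probe its permissions (a simple sketch; the expected answers depend on the RBAC that Kiosk set up for the account):

kubectl auth can-i create pods --namespace kiosk   # namespace-scoped access granted to the account, if any
kubectl auth can-i list nodes                      # cluster-scoped access should normally be denied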

Recommendation: with this walkthrough in hand, readers should have a firmer grasp of this area. Kiosk considerably reduces the operational and maintenance cost of user management, and I recommend adopting it.

Exercise 3: Implement application release strategies such as canary, blue-green, and rollback through orchestration policies #

The main reason to learn Kubernetes is to help an enterprise build a more advanced application release platform: by leaning on the innovation of open source technology, a company gains release capabilities comparable to those of the major vendors without a large development investment.

Native Kubernetes supports rolling updates of individual application container Pods, and many companies have built canary- and blue-green-like release features on top of that capability. Yet when the results are reviewed, something usually feels off. The reason is that Kubernetes itself does not provide application-layer release features such as canary or blue-green, so reimplementing them this way heads in the wrong direction.

Kubernetes later introduced the Ingress object, which provides Layer 7 north-south traffic routing. The most common implementation is the NGINX Ingress Controller backed by Nginx. Because Nginx can match header tags in incoming requests, it lets us support canary releases at the business traffic entry point very well. The following annotations control canary releases in the NGINX Ingress:

nginx.ingress.kubernetes.io/canary    "true" or "false"
nginx.ingress.kubernetes.io/canary-by-header    string
nginx.ingress.kubernetes.io/canary-by-header-value    string
nginx.ingress.kubernetes.io/canary-by-header-pattern    string
nginx.ingress.kubernetes.io/canary-by-cookie    string
nginx.ingress.kubernetes.io/canary-weight    number

For traffic splitting based on the request header, the key annotations are explained in detail below:

nginx.ingress.kubernetes.io/canary-by-header

Tells the Ingress to route requests to the service specified in the canary Ingress based on the named request header. When the header's value is “always”, the request is always routed to the canary; when it is “never”, it is never routed to the canary. For any other value the header is ignored and the request is compared against the remaining canary rules by priority.

nginx.ingress.kubernetes.io/canary-by-header-value

The header value to match, telling the Ingress to route a request to the service specified in the canary Ingress. When the header named by canary-by-header carries exactly this value, the request is routed to the canary; any other value is ignored and the request is compared against the remaining canary rules by priority. This annotation must be used together with canary-by-header.

nginx.ingress.kubernetes.io/canary-by-header-pattern

Works like canary-by-header-value, except that the header value is matched against a regular-expression pattern rather than a single fixed value, which lets you route on custom header values instead of one predefined value. It has no effect unless the “nginx.ingress.kubernetes.io/canary-by-header” annotation is also defined.

Here is a complete example:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "new"
    nginx.ingress.kubernetes.io/canary-by-header-value: "xxx"
  labels:
    app: demo
  name: demo-ingress
  namespace: demo-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: demo-canary
          servicePort: 80
        path: /
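
Besides header matching, the same mechanism supports a gradual, percentage-based rollout through the canary-weight annotation listed above. A minimal sketch (the demo-canary backend is reused from the example, and the weight value is purely illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"   # send roughly 20% of requests to the canary backend
  labels:
    app: demo
  name: demo-ingress-weighted
  namespace: demo-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: demo-canary
          servicePort: 80
        path: /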

The use cases for this kind of canary release are limited, though. An enterprise application release typically involves dozens of components, further segmented behind multiple gateways, and the pressing need is to manage traffic across all of those segmented components effectively. That traffic strategy aligns with the current architectural direction towards a microservice mesh, with Istio being the best-known open-source framework. In my view, using a service mesh to observe and steer application traffic is more flexible.

When it comes to Istio, most readers will have deployed the Bookinfo sample project at some point; its most important feature is the tagging and switching of business traffic:

![bookinfo](../images/54f84c90-d65e-11ea-aefa-4fa1d18dcf14.jpg)

Istio achieves blue-green deployment by controlling headers:

![blue green](../images/605f0010-d65e-11ea-b558-cd3c105f83ae.jpg)

Istio achieves canary deployment by changing header values:

![canary deployment](../images/73f16050-d65e-11ea-8a75-23aebf65f18c.jpg)
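
To make that header-based switching concrete, here is a minimal Istio VirtualService sketch in the style of the Bookinfo sample: requests carrying a specific header value go to the v2 subset while everything else stays on v1. The reviews host, the end-user header, and the subset names are assumptions taken from Bookinfo, not the exact manifests behind the figures above; the v1 and v2 subsets would be declared in an accompanying DestinationRule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:                       # requests from this user carry the header end-user: jason
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2               # canary / "green" version
  - route:                       # everyone else keeps hitting the stable version
    - destination:
        host: reviews
        subset: v1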

**My suggestion**: Using a service mesh to switch application traffic is a more natural design approach. It relies on the underlying technology of Kubernetes' classic rolling update. However, what sets a service mesh apart from native Kubernetes is that it introduces more advanced application assurance features, such as circuit breaking, blacklisting, and rate limiting, in addition to guiding traffic through headers. These features are worth using.

Exercise 4: Configure a Reasonable Network Solution and Visualize Traffic Data #

After an application is deployed to a Kubernetes cluster, the most pressing need is to make its traffic visible. Yet many users are still agonizing over which network solution is best. From my perspective, that choice depends mainly on your hardware and network infrastructure. Moreover, the performance of the mainstream container network plugins is now very close to native networking; the poor experience of the early container networks is long gone.

Advances in container technology lean heavily on capabilities in the Linux kernel, and container networking is no exception. Modern kernels (4.9 and later are commonly required) ship mature extended Berkeley Packet Filter (eBPF) support, which lets programs defined in user space be attached to kernel hook points such as sockets, tracepoints, and the packet-receive path, where they receive and process data. That makes eBPF a powerful tool for implementing network security policies and for visualization. The open-source observability project Hubble, for example, is built on eBPF:

![img](../images/9678a3e0-d65e-11ea-855e-35d62ffd230b.jpg)
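
As a taste of what this enables, here is a minimal sketch of querying traffic flows with the hubble command-line client, assuming Hubble has already been deployed alongside Cilium and the CLI can reach the Hubble API (flag spellings may differ slightly between versions):

# Stream live network flows for one namespace
hubble observe --namespace demo --follow

# Show the most recent flows that were dropped, e.g. by a network policy
hubble observe --verdict DROPPED --last 20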

Hubble originates from, and is backed by, **Cilium**, a mainstream cloud-native container network. I mention it here because Cilium uses eBPF to forward container network traffic directly, replacing iptables, and it is an open-source project capable of challenging the **Calico** container network solution. If you are curious how they compare, here is a reference:

> <https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-april-2019-4a9886efe9c4>

You can see that the performance difference is minimal:

![img](../images/b98f7d40-d65e-11ea-8a86-ed86f9ad27de.jpg)

**My suggestion**: Choosing a container network is no longer a complex decision; any mainstream solution covers most business needs. However, newer container networks such as Cilium are starting to offer user-friendly features beyond basic connectivity, such as multi-cluster networking, replacing kube-proxy, and data visualization. They are worth your attention and exploration.

Exercise 5: Make Good Use of Official Documentation and Tools to Solve Problems #

Kubernetes releases a new minor version roughly every three months, and trying to keep up by reading every piece of documentation out there is enough to make your head spin. To solve application deployment problems you need the most up-to-date, complete, and authoritative information about Kubernetes, and the official community documentation site (<https://kubernetes.io/docs>) is exactly that. Do not rely on second-hand sources: the official documentation is read and reviewed by developers worldwide, making it both more reliable and more timely.

Additionally, Kubernetes is only one component in the technology blueprint of the Cloud Native Computing Foundation (CNCF). When you run into technical difficulties, browse the CNCF technology landscape for inspiration:

> <https://landscape.cncf.io/>

I believe you will find satisfactory architectural advice and solutions there.

**My suggestion**: If you have questions about concepts, search <https://kubernetes.io/docs> for the latest information. If you have questions about technical architecture, refer to the CNCF technology landscape (<https://landscape.cncf.io/>) for cloud-native architecture blueprints. I believe you can find clues there too.

Summary #

Throughout my own practice with these exercises I have found that enterprises' fear of application containers is real. The way to overcome it is to practice and verify the feasibility of the technical details in your own local environment. Because every company's hardware infrastructure is different, do not blindly copy the Kubernetes success stories of the large vendors. Practice grounded in your own business scenarios and needs is the goal; become the true architect and owner of your architecture.