
43 Technology Evolution: The Evolution of Virtualization Technology #

Hello, I am Kong Lingfei.

In the previous three lectures, I introduced how traditional applications are deployed. However, as software architecture enters the cloud-native era, we increasingly build and deploy applications on cloud-native architecture. To demonstrate how to deploy an application following cloud-native methodology, I will later explain how to deploy the IAM application on Kubernetes.

Deploying the IAM application in a Kubernetes cluster involves several important cloud-native technologies, such as Docker, Kubernetes, and microservices, and cloud-native architecture includes many other technologies besides. To familiarize you in advance with the technologies the deployment requires, and to give you a comprehensive picture of today's hottest cloud-native architecture, in this lecture I will take a technology-evolution perspective and explain in detail how the virtualization technologies in the cloud-native stack have evolved.

Because this lecture involves many technology stacks, I will focus on the evolution process and will not go into detail on the specific implementation principles and usage methods of each technology. If you are interested, you can learn on your own or refer to this material I have prepared for you: awesome-books.

Before discussing this evolution process, let’s first consider this question: Why do we use the cloud?

Why do we use the cloud? #

The reason for using the cloud is actually quite simple. We just want to deploy a service on the cloud that can stably provide business capabilities to the outside world. This service is deployed in the form of an application on the cloud. To start an application, we also need to request system resources. In addition, we need to ensure that the application can iterate and release quickly, and can quickly recover after a failure. This requires us to manage the lifecycle of the application.

The application, system resources, and application lifecycle management are the three dimensions that constitute all our demands for the cloud, as shown in the following diagram:

Image

In the next two lectures, I will walk through these three dimensions and introduce the technological evolution of each in detail. In this lecture, I will first cover the technological evolution of the system resources dimension; in Lecture 44, I will cover the application dimension and the application lifecycle management dimension.

Currently, there are three forms of system resources, namely physical machines, virtual machines, and containers. These three forms of system resources have evolved around virtualization. Therefore, introducing the technological evolution of system resources is actually introducing the technological evolution of virtualization.

Next, let’s take a look at how virtualization technology has evolved.

Evolution of Virtualization Technology #

The concept of virtualization actually emerged in the 1960s. However, due to technological and environmental constraints, virtualization remained dormant for a time; it was not until the early 21st century, when virtual machines became widespread, that virtualization technology saw a burst of growth and gradually matured.

So, what is virtualization technology? Simply put, virtualization abstracts a computer's hardware and system resources and presents them as a logical view, so that resources can be partitioned and combined logically rather than physically. Through virtualization, we can run multiple virtual machines on a single computer, thereby maximizing the utilization of its hardware.

Virtualization comes in many forms, such as operating system virtualization, storage virtualization, network virtualization, and desktop virtualization. The most important of these is operating system virtualization, which is underpinned by the virtualization of the CPU, memory, storage, and network, resources collectively known as computing resources.

Since computing resource virtualization is dominant in the field of virtualization, when we talk about the evolution of virtualization technology, we are actually referring to the evolution of computing resource technology. In my opinion, the evolution of virtualization technology can be summarized as follows: physical machine stage -> virtual machine stage -> container stage (Docker + Kubernetes) -> serverless stage.

Physical Machine Stage #

As mentioned earlier, virtualization technology encompasses many aspects, but the entire evolution of virtualization technology revolves around CPU virtualization technology. This is because the correct implementation of memory virtualization, I/O virtualization, and others relies on the correct processing of certain sensitive instructions in memory and I/O, which involves CPU virtualization. Therefore, CPU virtualization is the core of virtualization technology. In this lecture, I will explain the evolution of virtualization technology based on CPU virtualization. Here, let me first introduce the relevant knowledge of CPU in the physical machine stage.

A CPU executes an instruction set, whose instructions can be divided into two types: privileged instructions and non-privileged instructions. Privileged instructions can change the system state, while non-privileged instructions cannot. To give the classic simplified example: an instruction that writes to memory is treated as privileged because it can change the system's state, whereas an instruction that merely reads memory is non-privileged because it does not affect that state.

Since privileged instructions can affect the entire system, chip manufacturers designed a mode called protected mode on the x86 architecture, which prevents code running without sufficient privilege from executing them and illegally accessing system resources.

Protected mode is implemented using rings. On the x86 architecture, there are a total of four rings, each with different permission levels: Ring 0 has the highest privilege level and can access all system resources, while Ring 3 has the lowest privilege level. The kernel runs on Ring 0, and applications run on Ring 3. Applications running on Ring 3 need to use system calls to invoke kernel functions on Ring 0 in order to request system resources.

This approach has a benefit: it prevents applications from directly manipulating system resources and thereby compromising system stability. With the higher-privileged kernel scheduling and allocating resources, the whole system is more efficient and more secure.

The relationship between rings and invocation on the x86 architecture is shown in the following diagram:

Image

In the physical machine stage, system resources are provided to users as physical machines. This provisioning approach has many problems: high cost, difficult maintenance, the need to build server rooms and install cooling equipment, and the inconvenience of creating and destroying servers. Therefore, in the cloud computing era we use virtual machines far more often than physical machines. Now let's look at the virtual machine stage.

Virtual Machine Stage #

Before discussing virtual machine technology, let me first introduce the virtualization vulnerability of the x86 architecture, since the evolution of CPU virtualization technology mainly revolves around working around this vulnerability.

Virtualization Vulnerabilities #

A virtualized environment is divided into three parts: the hardware, the virtual machine monitor (VMM, also called the hypervisor), and the virtual machines.

You can think of a virtual machine as an efficient, isolated replica of a physical machine. It has three characteristics: homogeneity, efficiency, and resource control. These characteristics mean that not every architecture can be virtualized. In particular, the most commonly used architecture, x86, is not a strictly virtualizable architecture; we say it has a virtualization vulnerability.

After virtualization technology emerged, a new concept arose: sensitive instructions. Sensitive instructions are instructions that operate on privileged resources, such as modifying the virtual machine's running mode or the physical machine's state, or reading and writing sensitive registers or memory. Clearly, all privileged instructions are sensitive instructions, but not all sensitive instructions are privileged instructions. The relationship between them can be represented simply by the following diagram:

Image

For an architecture to be virtualizable, all sensitive instructions must be privileged instructions. In the x86 architecture, however, some sensitive instructions are not privileged instructions; the simplest example is an instruction that accesses or modifies the virtual machine's mode. Therefore, the x86 architecture has a virtualization vulnerability.

Evolution of Hypervisor Technology #

To work around the virtualization vulnerability of the x86 architecture, a series of virtualization technologies were developed, the most central of which is hypervisor technology. So next, I will introduce the evolution of hypervisor technology.

A hypervisor, also known as a virtual machine monitor (VMM), is used to create and run virtual machines (VMs). It is an intermediate software layer that runs between the underlying physical server and the operating systems, allowing multiple operating systems and applications to share the hardware. Because the hypervisor virtualizes and shares system resources such as CPU and memory, a single host computer can support multiple guest virtual machines.

The relationship between the Hypervisor, the physical machine, and the virtual machine is as follows:

Image

In chronological order, the development of Hypervisor technology has gone through the following 3 stages:

  1. Software-Assisted Full Virtualization: This virtualization technology emerged in 1999 and includes three techniques: interpretation (e.g. Bochs), scanning and patching (e.g. VirtualBox), and binary translation (e.g. VMware, QEMU).
  2. Para-virtualization: This virtualization technology emerged in 2003 and is also known as semi-virtualization or quasi-virtualization. Its typical hypervisor representative is Xen.
  3. Hardware-Assisted Full Virtualization: This virtualization technology emerged in 2006, with KVM as its typical hypervisor representative. The mainstream virtualization technology in use today is hardware-assisted full virtualization, represented by KVM.

Next, I will briefly introduce these three stages.

Let’s start with the first stage, software-assisted full virtualization, which itself evolved through three techniques: interpretation, scanning and patching, and binary translation.

  1. Interpretation

Simply put, interpretation takes one instruction, simulates its execution effect, and then fetches the next instruction. This technique is simple to implement and has low complexity. During execution, the compiled binary code is not loaded directly onto the physical CPU; instead, the interpreter decodes it instruction by instruction and calls a corresponding function to simulate each instruction's functionality. The interpretation process is shown in the following diagram:

Image

Because every instruction is simulated, interpretation closes the virtualization vulnerability, and it can even simulate a heterogeneous CPU architecture; for example, it can simulate an ARM virtual machine on an x86 host. However, because every instruction is simulated indiscriminately, the performance of this technique is very low.

  2. Scanning and Patching

Interpretation incurs a significant performance loss. Since the virtual CPU simulated in the virtual machine has the same architecture as the physical CPU (homogeneity), most instructions can be executed directly on the physical CPU, so a more optimized simulation technique can be used to compensate for the virtualization vulnerability.

Scanning and patching lets most instructions execute directly on the physical CPU, while the sensitive instructions in the guest operating system are replaced (patched) with branch instructions or with instructions that trap into the VMM. This way, whenever the guest reaches a sensitive instruction, control flow enters the VMM, which executes it on the guest's behalf. The process is shown in the following diagram:

Image

Because most instructions do not need to be simulated and run directly on the CPU, the performance loss of this approach is relatively small, and the implementation is relatively simple.

  3. Binary Translation

This is the main software-assisted full virtualization method, and it was used by early VMware. Binary translation creates a cache in the VMM and places translated code there. When an instruction is to be executed, the translated instruction corresponding to it is fetched directly from the cache and executed on the CPU.

In terms of performance, binary translation and scanning-and-patching each have their own advantages and disadvantages, but binary translation is the most complex of the three to implement. The process is shown in the following figure:

Image

Having read this far, you may be confused by the concepts of simulation and translation, so let me explain the difference: simulation (emulation) reproduces the effect of action A by performing action B in software, while translation converts instruction A into an equivalent instruction B that then runs directly on the CPU. The two are fundamentally different.

Next, let’s look at the second stage of Hypervisor technology development, Para-virtualization.

Software-assisted full virtualization translates or simulates x86 instructions, which inevitably incurs some performance loss, and in some production-grade scenarios this loss is unacceptable. Therefore, in 2003 Para-virtualization emerged, also known as semi-virtualization or quasi-virtualization. Compared with the earlier virtualization techniques, Para-virtualization greatly improves performance, approaching that of a native physical machine.

The general principle of Para-virtualization is as follows: the hypervisor runs in Ring 0, and the guest operating system kernel is modified so that sensitive instructions are replaced with hypercalls. A hypercall calls directly into the VMM, which sidesteps the virtualization vulnerability (all sensitive instructions are thereby captured by the VMM). Since there is no simulation or translation step, this yields the highest performance of the software approaches. The process is shown in the following figure:

Image

Because the guest operating system must be modified, this approach cannot virtualize closed-source operating systems such as the Windows family, and modifying the guest kernel requires extra development and maintenance effort. Therefore, as hardware-assisted full virtualization matured, Para-virtualization gradually fell out of use.

Next, let’s look at the third stage of Hypervisor technology development, Hardware-assisted Full Virtualization.

In 2006, Intel and AMD introduced hardware-level support for virtualization, namely Intel's VT-x technology and AMD's SVM (AMD-V). The core idea is to introduce a new operating mode, which can be understood as a new CPU ring, Ring -1, with higher privileges than Ring 0. This allows the VMM to run in Ring -1 while the guest kernel runs in Ring 0.

Under normal circumstances, the guest's instructions are executed directly by the hardware without going through the VMM. When the guest executes a sensitive instruction, the CPU intercepts it at the hardware level and switches to the VMM to handle it, thereby closing the virtualization vulnerability. This is shown in the following figure:

Image

Because hardware-assisted virtualization is supported at the hardware level, it offers higher performance than software emulation, and it does not require modifications to the operating system. Therefore, even today, hardware-assisted full virtualization remains the mainstream virtualization approach.

Next, let’s look at the third stage of virtualization technology evolution, the container stage.

Container Stage #

In 2005, a new virtualization technology, container technology, emerged. Containers are a lightweight virtualization technology that can provide multiple isolated operating system environments on a single host: each container gets its own writable file system and resource quotas, with isolation achieved through namespaces applied to groups of processes.

Docker Container Engine #

The representative project of container technology is Docker, a container engine launched by Docker Inc. in 2013. Thanks to its light weight and ease of use, Docker quickly gained widespread adoption, shifting the dominant form of system resources from virtual machines to containers.

With Docker containerization, developers can package an application together with its dependencies and configuration into a portable container, then deploy it on any popular Linux or Windows machine without worrying about the underlying system or environment dependencies. This makes containers an ideal vehicle for deploying individual microservices, as the sketch below illustrates.
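
To make this portability concrete, here is a minimal sketch of such a declaration using Docker Compose; the service name, image, and port are hypothetical, and the image itself would be built from a Dockerfile that packages the application and its dependencies:

# docker-compose.yml: a minimal, illustrative sketch (service name, image, and port are hypothetical)
services:
  iam-apiserver:
    image: example/iam-apiserver:1.0   # hypothetical application image built from a Dockerfile
    ports:
      - "8080:8080"                    # host:container port mapping
    environment:
      LOG_LEVEL: info                  # configuration travels with the declaration
    restart: unless-stopped            # let the Docker engine restart the container on failure

With Docker installed, docker compose up -d starts the service identically on any machine, because everything the application needs ships inside the image.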

Docker uses Linux namespace technology for resource isolation and cgroup technology for resource allocation and limits, resulting in higher resource utilization. Because Docker shares the kernel with the host machine and does not need to simulate an entire operating system, containers start much faster. And since a Docker image packages all dependencies and configuration, it provides a consistent runtime environment everywhere it runs, enabling faster migration. These properties have also propelled the development of DevOps practices.
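
At the Docker level, these cgroup quotas are set with flags such as --cpus and --memory on docker run; at the orchestration layer introduced below, the same quotas appear declaratively as resource requests and limits. A hedged sketch (the Pod name and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo            # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:                  # the scheduler reserves at least this much
        cpu: 250m
        memory: 64Mi
      limits:                    # cgroup-enforced ceiling for the container
        cpu: 500m
        memory: 128Mi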

Now let’s compare Docker with virtual machines to experience the power of Docker. The architecture comparison between the two is shown in the following figure:

Image

As you can see, containers are much lighter than virtual machines because they do not need to simulate a complete operating system. Therefore, compared to virtual machines, containers have the following advantages:

Image

From this table, you can see that Docker has significant advantages over virtual machines in terms of startup time, disk usage, performance, system support, resource utilization, and environment configuration. These advantages make Docker a more popular application deployment medium than virtual machines.

You may wonder: is Docker so good that it has no drawbacks? Obviously not; Docker has its limitations.

Consider first what containers look like in a production environment: the number of containers can be extremely large and their relationships complex, and production applications are inherently clustered, requiring high availability and load balancing. Docker mainly solves the deployment problem of an individual service; it cannot solve these cluster-level problems on its own. Moreover, plain Docker offers no communication between containers on different nodes.

These problems, however, can be solved by container orchestration technologies. There are many excellent container orchestration technologies in the industry, with popular options including Kubernetes, Mesos, Docker Swarm, and Rancher. In recent years Kubernetes has grown to become the de facto standard for container orchestration.

Kubernetes Container Orchestration Technology #

Because we will be deploying IAM applications based on Kubernetes later, I will provide a detailed introduction to Kubernetes service orchestration technology.

Kubernetes is an open-source container orchestration technology (orchestration can be simply understood as scheduling and management) developed by Google, used for the automated deployment, scaling, and management of containerized applications. It originated from Google’s internal Borg project. The main features of Kubernetes include networking, service discovery and load balancing, rolling updates & rollbacks, self-healing, secure configuration management, resource management, automatic scaling, monitoring, and service health checks.

Through these features, Kubernetes solves the problems that Docker faces in a production environment. Kubernetes and Docker complement each other, and the success of Kubernetes has led to greater adoption of Docker, ultimately making Docker a more popular computing resource provisioning method than virtual machines.

Next, I will introduce the basic concepts of Kubernetes (K8s) based on the following architecture diagram:

Image

Kubernetes adopts a Master-Worker architecture. The Master is the most important node in Kubernetes: the core Kubernetes components are deployed on it, and together they make up the Kubernetes control plane. The Worker side is the cluster of Nodes; each Node represents concrete computing resources and can be either a physical server or a virtual machine.

Let’s first introduce the components on the Master node.

  • Kube API Server: provides the sole entry point for resource operations, along with mechanisms for authentication, authorization, admission control, and API registration and discovery.
  • Kube Scheduler: Responsible for resource scheduling, scheduling Pods to corresponding nodes according to predetermined scheduling policies.
  • Kube Controller Manager: Responsible for maintaining the state of the cluster, such as fault detection, automatic scaling, rolling updates, etc.
  • Cloud Controller Manager: This component was added in Kubernetes 1.6 to interact with the underlying cloud providers.
  • etcd: A distributed key-value store independent of Kubernetes. It mainly stores Kubernetes' critical metadata and supports horizontal scaling to ensure high availability of that metadata. It implements strong consistency based on the Raft algorithm, and its watch mechanism is a key feature that Kubernetes relies on.

After introducing the Master, let’s take a look at the components required for each Kubernetes Node.

  • Kubelet: Responsible for maintaining the lifecycle of containers, as well as managing volumes (CVI) and networks (CNI).
  • kube-proxy: kube-proxy is a network proxy running on each node in the cluster, maintaining network rules on the node. It allows network communication with Pods from within or outside the cluster, and is responsible for providing service discovery and load balancing for Services within the cluster.
  • Container Runtime: Responsible for image management and the actual execution of Pods and containers (CRI). The default container runtime is Docker.

The Services, Deployments, Pods, etc. shown in the architecture diagram are not components but Kubernetes resource objects, which we will introduce later. Here, let me briefly introduce the UI dashboard and kubectl in the architecture diagram.

  • UI dashboard is the web-based control panel provided by Kubernetes. It can be used to control the cluster in various ways and directly interact with the API Server. It provides a visual interface to create Kubernetes objects, view Pod running status, etc. The UI dashboard interface is shown in the figure below:

Image

  • kubectl is the client tool for Kubernetes which provides numerous commands, subcommands, and command-line options. It supports developers or operators to quickly operate Kubernetes clusters from the command line, such as managing and querying various Kubernetes resources, labeling resources, and more. The screenshot below shows an example of using the kubectl describe service iam-pump command to get detailed information about iam-pump:

Image

Kubernetes has a variety of Objects; to list all the kinds of Objects, you can use the kubectl api-resources command. We use these Objects to create, delete, and otherwise operate on the different types of Kubernetes resources. Because the core purpose of this lecture is to introduce the evolution of cloud-native technologies, I will not go into detail on how each resource object is used; if you are interested, you can refer to the official Kubernetes documentation.

Now, let me briefly introduce some basic information about Kubernetes Objects and the three types of objects that appear in the architecture diagram: Deployments, Pods, and Services.

Here is a typical YAML configuration file for a Kubernetes object:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  # ...

In this configuration file, apiVersion and kind together determine which component handles the YAML file: apiVersion identifies the API group and version the object belongs to, and kind identifies a resource type within that group. Here, v1 and Pod identify the Pod resource type in the core API group (api/v1).

The metadata section contains metadata about the object, including name, namespace, labels, and annotations. The name needs to be unique within the namespace and serves as the unique identifier for the object. labels and annotations are used for labeling and adding informative notes to the object, respectively.
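
To make these metadata fields concrete, here is the Pod sketch above with its metadata filled in; the namespace and annotation values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nginx                  # must be unique within the namespace
  namespace: default           # illustrative; objects without a namespace land in "default"
  labels:
    name: nginx                # labels: identifying key/value pairs that selectors can match
  annotations:
    description: "demo Pod"    # annotations: free-form, non-identifying notes (illustrative)
spec:
  containers:
  - name: nginx
    image: nginx

Applying this with kubectl apply -f creates a runnable Pod; the selector examples below rely on exactly this label mechanism.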

Next, I will introduce Pod, Deployment, and Service, which are the three types of objects.

  1. Pod

In general, we don't deploy Pods directly; instead we deploy a Kubernetes object such as a Deployment or StatefulSet that manages them. Deployments are typically used for stateless services, while StatefulSets are used for stateful services that need volumes for data persistence. Here's an example of deploying two Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
  2. Service

Service is another common object in Kubernetes. A Service acts as a load balancer in front of a group of Pods, which it selects by label. Note that, in the Deployment above, spec.selector.matchLabels must match the labels in the Pod template (run: my-nginx); a Service uses the same label mechanism to pick its backend Pods.

In the following example, the selector run: my-nginx binds this Service to the nginx Pods created by the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
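
The Service above uses the default ClusterIP type, so it is reachable only from inside the cluster. As a hedged variation, the sketch below sets type: NodePort to expose the same group of Pods on a static port of every node; the nodePort value is illustrative and must fall in the cluster's configured range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
spec:
  type: NodePort               # expose the Service on a static port of each node
  ports:
  - port: 80                   # in-cluster port of the Service
    targetPort: 80             # container port the traffic is forwarded to
    nodePort: 30080            # illustrative; must lie in the NodePort range
    protocol: TCP
  selector:
    run: my-nginx              # same selector, so the same backend Pods as above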

Finally, I would like to introduce container cloud platforms based on Kubernetes. All major public cloud providers offer a Kubernetes-based container management platform; well-known container service platforms in China include Tencent Kubernetes Engine (TKE) and Alibaba Cloud Container Service for Kubernetes (ACK).

TKE is a container-centric solution built on native Kubernetes that addresses environment-related issues across development, testing, and operations, helping users reduce costs and improve efficiency. It is fully compatible with the native Kubernetes API and extends it with Tencent Cloud Kubernetes plugins such as cloud disks and load balancers, and it provides a highly reliable, high-performance network solution based on Tencent Cloud's private network.

Serverless Stage #

What is the evolution direction of virtualization technology after the container stage? Let’s move on to the Serverless stage.

In 2014, AWS introduced Lambda, which is a Serverless service. Since then, Serverless has gained increasing attention and become one of the most notable technologies in recent years. Let me first explain what Serverless is.

Serverless literally means “without servers.” However, it does not mean that no servers are involved; rather, server management and resource allocation are invisible to users and are handled by the platform provider. Serverless is not a specific programming framework, library, or tool; it is a software architecture concept and approach. Its core idea is that users need not care about the underlying resources supporting their application, such as CPU, memory, and databases, and can focus solely on their business development.

Serverless has many characteristics, and the main ones are as follows:

  • Infinite elastic computing capacity: Automatically scaling instances horizontally based on requests, providing almost unlimited scalability.
  • “Zero” operations and maintenance: No need to request and maintain servers.
  • Extreme scalability: Able to flexibly scale based on metrics such as CPU, memory, and request volume and support scaling down to 0.
  • Pay-as-you-go: True usage-based billing.

In my opinion, Serverless has three technical forms: Function as a Service (FaaS), Serverless Containers, and Backend as a Service (BaaS), as shown in the figure below:

Image

Among these three forms, Serverless Containers are the core, while FaaS and BaaS play supporting roles. Serverless Containers can host the core architecture of the business, FaaS can adapt well to trigger scenarios, and BaaS can meet various other Serverless component requirements, such as Serverless databases and storage.

All major public cloud providers have corresponding products for these three forms of Serverless. Among them, the outstanding products are Tencent Cloud’s Serverless Cloud Function (SCF), Elastic Kubernetes Service (EKS), and TDSQL-C. Let me introduce each of them:

  • EKS: Elastic Kubernetes Service (EKS) is a service mode provided by Tencent Cloud Container Service that allows you to deploy workloads without purchasing nodes. EKS is fully compatible with native Kubernetes and supports resource purchase and management in a native way, with billing based on the actual resource usage of the containers.
  • SCF: Serverless Cloud Function (SCF) is a serverless execution environment provided by Tencent Cloud for enterprises and developers, allowing you to run code without the need to purchase and manage servers. You only need to write the core code in languages supported by the platform, set the conditions for running the code, and the code will run elastically and securely on Tencent Cloud’s infrastructure.
  • TDSQL-C: Tencent Distributed Cloud Database (TDSQL-C) is a self-developed, new-generation, high-performance, highly available, enterprise-grade distributed cloud database from Tencent Cloud, with high throughput and reliability among its advantages.

As we mentioned at the beginning, our demands for the cloud consist of three dimensions: application, system resources, and application lifecycle management. With this, I have finished introducing the technical evolution of the system resource dimension. In the next lecture, I will introduce the technical evolution of the application dimension and application lifecycle management dimension.

Summary #

In this lecture, I mainly discussed the evolution of virtualization technology, which is the technical evolution of the system resources dimension.

The evolution of virtualization technology proceeded as follows: physical machine stage -> virtual machine stage -> container stage -> Serverless stage. The evolution from the physical machine stage to the virtual machine stage mainly revolved around addressing the virtualization vulnerability of the x86 architecture. To virtualize the CPU, memory, and I/O, sensitive instructions must be captured to prevent them from modifying the system state and affecting system stability. But on the x86 architecture some sensitive instructions are not privileged instructions, meaning a guest can execute them directly on the physical CPU without their being captured, potentially affecting the system state. This is why we say the x86 architecture has a virtualization vulnerability.

In the virtual machine stage, three virtualization technologies emerged: software-assisted full virtualization, Para-virtualization, and hardware-assisted full virtualization. Because hardware-assisted full virtualization requires no modification of the guest kernel and delivers performance close to a physical machine, it became the mainstream virtualization technology, with KVM as the de facto standard.

Because containers are more lightweight than virtual machines, and because the emergence of the Docker and Kubernetes projects made large-scale use of container technology feasible, the form in which system resources are provided has in recent years shifted from virtual machines to containers.

As for the ultimate form of system resources, I believe it will be Serverless. Serverless comes in three technical forms: Function as a Service (FaaS), serverless containers, and Backend as a Service (BaaS). As business architectures become serverless, deployment architectures will be based mainly on serverless containers, with FaaS in a supporting role.

After-class Exercises #

  1. Docker’s isolation is weaker than that of virtual machines. Think about whether there is a better way to quickly start a lightweight container with better isolation than Docker.
  2. Think about how to transform an ordinary Kubernetes cluster into a serverless Kubernetes cluster.

Feel free to leave a message in the comment section to discuss with me. See you in the next lecture.