02 Basic Concepts of Kubernetes for Beginners

Well, let’s finally get into the main topic and set aside the rigid lecture style. We will begin our exploration of Kubernetes with the example of a fictional, newly established project team. (Hereinafter, we’ll refer to Kubernetes as K8S for short.)

At present, the project team has only one member, whom we’ll call Xiao Zhang. When the team was first established, Xiao Zhang hadn’t yet figured out exactly what they would build, but one thing was certain: they needed to provide services externally. So he applied to the company for a server.

Node #

What can this server be used for? It can run services, databases, tests, and so on. We refer to the tasks the server performs as work, which makes the server a worker node in Kubernetes (K8S).

A node can be a physical machine or a virtual machine. For our project, this server is the node in K8S.

Node Status #

When we get this server, the first thing we do is log in and check its basic configuration and information. The same applies to a node joining a K8S cluster: its status is checked and reported to the cluster’s master. Let’s take a look at the information we care about.

Address #

First of all, we care about the IP address of the server, including the internal and external IP addresses. In the context of a K8S cluster, the concept is similar. The internal IP can be accessed within the K8S cluster, while the external IP can be accessed from outside the cluster.

Secondly, we also care about the hostname of our server. For example, executing the hostname command on the server will give us the hostname. In the K8S cluster, the hostname of each node is also recorded. Of course, we can override the default hostname by passing the --hostname-override parameter to the Kubelet. (We will explain what Kubelet is later.)
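To see how K8S records this, here is a sketch of the `status.addresses` section of a Node object, roughly as it would appear when you inspect a node; the node name and IPs below are made-up illustrative values:

```yaml
# Excerpt of a Node object's status (illustrative values).
status:
  addresses:
  - type: InternalIP      # reachable from within the cluster
    address: 10.0.0.11
  - type: ExternalIP      # reachable from outside the cluster
    address: 203.0.113.11
  - type: Hostname        # defaults to the node's hostname,
    address: node-1       # unless --hostname-override is set on the Kubelet
```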

Information #

After that, we need to check the basic information of the server, such as the system version. We can use commands like cat /etc/issue or cat /etc/os-release to view it. In a K8S cluster, this basic information about each node is recorded as well.

Capacity #

We usually pay attention to the number of CPU cores and the amount of memory of the server. You can use cat /proc/cpuinfo to check the number of CPU cores, and cat /proc/meminfo or free to check the memory size. In a K8S cluster, this information is collected and used to calculate how many Pods can be scheduled on the node. (We will explain what a Pod is later.)
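On the server itself, these numbers can be read directly. A minimal sketch, assuming a typical Linux system with the /proc filesystem:

```shell
#!/bin/sh
# Count CPU cores, two equivalent ways.
nproc
grep -c '^processor' /proc/cpuinfo

# Total memory in kilobytes, as reported by the kernel.
grep '^MemTotal' /proc/meminfo

# Human-readable summary, if the free utility is installed.
command -v free >/dev/null && free -h || true
```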

Conditions #

For the server we obtained, we check all of the basic information above and use it to judge whether the machine meets our needs. The same applies to a K8S cluster. When all of the above information meets the requirements, the cluster marks the node’s recorded condition as Ready (Ready = True), indicating that our server has been officially delivered. Let’s take a look at the other parts.
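This judgment shows up in the Node object’s `status.conditions`. A sketch of what the Ready condition looks like, with illustrative values:

```yaml
# Excerpt of a Node object's status.conditions (illustrative).
status:
  conditions:
  - type: Ready
    status: "True"        # the node is healthy and can accept Pods
    reason: KubeletReady
    message: kubelet is posting ready status
```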

Deployment and Pod #

Xiao Zhang has now obtained the server, and it is in a usable state. Although it is not yet clear exactly what the team will build, let’s deploy a homepage to announce the establishment of the project team.

Let’s take a look at the general approach. First, we’ll create a static webpage, for example, called index.html. Then, we’ll start an Nginx or any other web server on the server to provide access to index.html.

To install and configure Nginx, you can refer to the official documentation of Nginx. The simplest configuration might look like this (keeping only the key parts):

location / {
    root   /www;
    index  index.html;
}

In the case of Kubernetes (K8S), what we want is “a service that serves index.html.” This expectation maps to the concept of a Deployment, which describes the desired state.

The running combination of Nginx and index.html, in turn, corresponds to the concept of a Pod, which is the smallest scheduling unit.
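Putting the two concepts together, a minimal Deployment for our homepage might look like the sketch below. The image name `project-homepage:v1` is a placeholder; it is assumed to be an Nginx image that already contains our index.html:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage
spec:
  replicas: 1                  # desired state: one Pod running the homepage
  selector:
    matchLabels:
      app: homepage
  template:                    # Pod template: Nginx and index.html together
    metadata:
      labels:
        app: homepage
    spec:
      containers:
      - name: nginx
        image: project-homepage:v1   # placeholder image name
        ports:
        - containerPort: 80
```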

Container Runtime #

Although we currently only have the official website deployed, in order to avoid a single point of failure, Xiao Zhang has applied for two more servers (even though it may seem wasteful). Now we need to scale up the existing service. In fact, what we need to do on the new servers is exactly the same: deploy Nginx to serve the index.html file, with identical configuration files. As you can see, adding a server means doing the same work all over again.

To avoid wasting time on repetitive work, Xiao Zhang considered using Ansible to manage server operations and configuration. However, since other services will need to be deployed on these servers later, conventional deployment methods can easily cause the services’ environments to interfere with each other.

So we considered virtualization technology, but based on general experience, virtualization technologies like KVM consume more resources and are not as lightweight. Containerization, on the other hand, is relatively lightweight and meets our expectations: it allows us to build once and run anywhere. We chose Docker, currently the most popular option.

Now that we have made the technology selection, it’s simple. We need to install Docker on our three servers. The installation process will not be described here, but you can refer to the official Docker installation documentation.

At this point, all we need to do is to build our service into an image. We need to write a Dockerfile, build an image, and deploy it to each server.
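As a sketch, such a Dockerfile can be very small. It is assumed here that index.html sits next to the Dockerfile, and we rely on the official nginx image serving files from /usr/share/nginx/html by default:

```dockerfile
# Build: docker build -t project-homepage:v1 .
# Run:   docker run -d -p 80:80 project-homepage:v1
FROM nginx:stable
COPY index.html /usr/share/nginx/html/index.html
```

The image name `project-homepage:v1` is a placeholder; once built, the same image can be pushed to a registry and run unchanged on all three servers.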

After all this discussion, we have now run our service in a container, and Docker is the container runtime we chose. The main reasons for choosing it are for environment isolation and to avoid repetitive work.

In the context of a Kubernetes (K8S) cluster, Docker corresponds to the concept of a container runtime. It is not the only option: alternatives include rkt, runc, and other runtimes that implement the OCI specification.

Summary #

In this section, we learned that a Node is a server used for work, with its own status and information. When its conditions meet certain criteria, the node is in a Ready state and can be used to perform subsequent work.

A Deployment can be understood as a description of the desired state. A Pod is the smallest schedulable unit in the cluster; we will explain Pods in detail later.

Docker is the container runtime we chose. It runs the service image we built, reduces repetitive environment-related work, and is easy to deploy. Besides Docker, other container runtimes are available.

Now that we have learned these basic concepts, in the next section, we will have an overall understanding of the architecture of K8S from a macroscopic perspective, so that we can continue our learning and practice.