05 Hands-On Practice: Building a Production-Ready Kubernetes Cluster #
In the previous lesson, we quickly set up a local Kubernetes cluster using Minikube. By default, that cluster consists of a single virtual machine instance where we can experience some basic functionality.
However, most people have requirements that go beyond a local testing cluster with just a single virtual machine node. They need a production-ready cluster that can run on servers, scale up and down on demand, and have all the necessary features.
Building a Kubernetes cluster has always been a headache for many people. In this lesson, we will build a production-ready cluster that will be useful for future learning or usage.
Solution Selection #
There are many cluster solutions available for K8S production environments. In this section, we will use the solution recommended by Kubernetes, called `kubeadm`, for deployment.
`kubeadm` is a CLI tool provided by Kubernetes. It can easily set up a minimal, production-ready cluster that follows the official best practices. A cluster built with `kubeadm` can pass the Kubernetes conformance tests. In addition, `kubeadm` supports other cluster lifecycle functions, such as upgrades and downgrades.
We choose `kubeadm` here because it lets us build a production-ready cluster quickly, without having to pay too much attention to the cluster's internal details. Through the coming chapters, we can get started with K8S quickly and learn its internal principles; with that knowledge, it will be easy to build a cluster step by step on physical machines.
Installing basic components #
Preparations #
Before using `kubeadm`, we need to do some preparation work.
- We need to disable swap. As we learned before, each node runs a necessary component called `kubelet`. Starting from K8S 1.8, swap must be disabled when starting `kubelet`; alternatively, the `kubelet` startup parameter `--fail-swap-on=false` must be set.
Although it is possible to change the parameter to allow swap, I suggest disabling `swap` unless your cluster has special requirements (such as needing a large memory footprint while keeping costs down) or you know exactly what you are doing. Otherwise, unexpected situations may occur, especially when memory limits are applied: when a Pod reaches its memory limit, it may spill into swap, which can prevent K8S from scheduling properly.
How to disable swap:
- Use `sudo cat /proc/swaps` to verify the device and file configurations for swap.
- Turn off swap with `sudo swapoff -a`.
- Use `sudo blkid` or `sudo lsblk` to view the device properties; note the entries labeled `swap` in the output.
- Remove all swap-related mount points shown in the previous command's output from `/etc/fstab`, to prevent the `swap` partition from being remounted when the machine restarts.
After performing the above operations, swap will be disabled. You can use the commands above, or the `free` command, to confirm that no swap remains.
```
[root@master ~]# free
              total        used        free      shared  buff/cache   available
Mem:        1882748       85608     1614836       16808      182304     1630476
Swap:             0           0           0
```
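The manual steps above can be combined into a short script. This is only a sketch: the `sed` pattern assumes typical `/etc/fstab` entries with `swap` in the filesystem-type column, and it keeps a backup of the file first.

```shell
# Disable all active swap immediately (lasts until reboot)
sudo swapoff -a

# Comment out every uncommented fstab line that contains a "swap" column,
# keeping a backup at /etc/fstab.bak, so swap is not re-enabled on reboot
sudo sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab

# The Swap row of `free` should now show all zeros
free | grep -i swap
```
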
- Use `sudo cat /sys/class/dmi/id/product_uuid` to view the machine's `product_uuid`. Make sure the `product_uuid` of every node in the cluster you are building is different. The MAC addresses of all nodes must also be different; you can view them with `ip a` or `ifconfig -a`.
As mentioned in Chapter 2, each Node has some information that is recorded in the cluster. Unique information such as `product_uuid` is recorded in the cluster's `nodeInfo`, where it is represented as `systemUUID`. We can obtain this information through the cluster's `API Server`, which will be detailed in later chapters.
- In Chapter 3, we mentioned that K8S is a C/S architecture and listens on some fixed ports after startup: `6443` (API Server), `2379`/`2380` (etcd), and `10250`-`10252` (kubelet and control-plane components). You can use `sudo netstat -ntlp | grep -E '6443|2379|2380|1025[0-2]'` to check whether these ports are occupied. If they are, please release them manually.
If you encounter a `command not found` prompt when running the above command, you need to install `netstat` first. On CentOS systems, install it with `sudo yum install net-tools`; on Debian/Ubuntu systems, with `sudo apt install net-tools`.
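Alternatively, instead of installing `net-tools`, you can run the same check with `ss` from `iproute2`, which is preinstalled on most modern distributions. A sketch, using the port list named above:

```shell
# List listening TCP sockets and filter for the ports K8S needs:
# 6443 (API Server), 2379/2380 (etcd), 10250-10252 (kubelet and control plane)
sudo ss -ntlp | grep -E ':(6443|2379|2380|1025[0-2])([^0-9]|$)'
```

An empty result means the ports are free.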
- As mentioned earlier, we need a container runtime, usually Docker. You can follow the instructions in the official Docker documentation to install Docker. After installation, remember to start the service.
The official recommendation is `17.03`, but I suggest installing `18.03` or newer, since Docker `17.03` reached its end of life (EOL) in March 2018. Although newer Docker versions have not been as extensively tested with K8S, they include many bug fixes and new features that avoid some known issues (for example, containers not being automatically deleted in some cases on `17.09`).
In addition, Docker's API is versioned and maintains good backward compatibility: when a client makes a request with an older API version, the daemon automatically negotiates down, so there should generally be no issues.
Installing kubectl #
In Chapter 3, we mentioned that `kubectl` is the client of the cluster. Since we are building the cluster, we must install it to verify the cluster's functionality.
The installation steps were detailed in Chapter 4 and will not be repeated here; refer to Chapter 4 or the instructions below.
Installing kubeadm and kubelet #
First, let's choose the version. We can get the current stable version number with the following command. (To access this address, you need to handle network restrictions yourself or use the alternative I provide later.)

```
[root@master ~]# curl -sSL https://dl.k8s.io/release/stable.txt
v1.11.3
```
Download the binary package and verify the version using `kubeadm version`.
```
[root@master ~]# curl -sSL https://dl.k8s.io/release/v1.11.3/bin/linux/amd64/kubeadm > /usr/bin/kubeadm
[root@master ~]# chmod a+rx /usr/bin/kubeadm
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:59:42Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
Alternatively, we can directly go to the official [GitHub repository](https://github.com/kubernetes/kubernetes) of Kubernetes, find the desired version [v1.11.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113), and download the "Server Binaries", as shown below:
![img](../images/1660f7b43d86c139)
To download via the terminal, use the following command:
```
[root@master tmp]# wget -q https://dl.k8s.io/v1.11.3/kubernetes-server-linux-amd64.tar.gz
```
**For users in China, I have prepared an alternate method for your convenience.**
Link: https://pan.baidu.com/s/1FSEcEUplQQGsjyBIZ6j2fA Password: cu4s
After downloading, verify the file to ensure its correctness, and then unpack it:
```
[root@master tmp]# echo 'e49d0db1791555d73add107d2110d54487df538b35b9dde0c5590ac4c5e9e039 kubernetes-server-linux-amd64.tar.gz' | sha256sum -c -
kubernetes-server-linux-amd64.tar.gz: OK
[root@master tmp]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@master tmp]# ls kubernetes
addons  kubernetes-src.tar.gz  LICENSES  server
[root@master tmp]# ls kubernetes/server/bin/ | grep -E 'kubeadm|kubelet|kubectl'
kubeadm
kubectl
kubelet
```
As you can see, all the required content is available in the `server/bin/` directory. Move the required files such as `kubeadm`, `kubectl`, `kubelet`, etc., to the `/usr/bin` directory:
```
[root@master tmp]# mv kubernetes/server/bin/kube{adm,ctl,let} /usr/bin/
[root@master tmp]# ls /usr/bin/kube*
/usr/bin/kubeadm  /usr/bin/kubectl  /usr/bin/kubelet
[root@master tmp]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:59:42Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@master tmp]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@master tmp]# kubelet --version
Kubernetes v1.11.3
```
As shown, all the required components are at version `v1.11.3`.
Configuration #
To ensure the stable operation of each component in a production environment and to make management easier, we add a `systemd` unit for `kubelet` to manage the service.
Configure kubelet #
```
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
[root@master tmp]# mkdir -p /etc/systemd/system/kubelet.service.d
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service.d/kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
[root@master tmp]# systemctl enable kubelet
```
Here, we add the systemd unit for `kubelet` and then add its `Drop-in` file. The `kubeadm.conf` file we added is automatically parsed by systemd and overrides `kubelet`'s base unit configuration. As you can see, it defines a series of configuration parameters; we will explain `kubelet` in detail in Chapter 17. (If you later edit these unit files, run `sudo systemctl daemon-reload` so systemd picks up the changes.)
Start #
At this point, the preliminary preparation is complete and we could use `kubeadm` to create a cluster. Before we do that, we still need to install two tools: `crictl` and `socat`.
Install the prerequisite tool crictl #
`crictl` is part of the `cri-tools` project, which contains two tools:

- `crictl`: the CLI for the `kubelet` CRI (Container Runtime Interface).
- `critest`: the testing toolset for the `kubelet` CRI.
You can install `cri-tools` from the Release page of the `cri-tools` project: pick the package that matches your K8S version according to the compatibility table in the project's README. Since we are installing K8S 1.11.3, we choose the latest v1.11.x package.
```
[root@master ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.11.1/crictl-v1.11.1-linux-amd64.tar.gz
[root@master ~]# echo 'ccf83574556793ceb01717dc91c66b70f183c60c2bbec70283939aae8fdef768 crictl-v1.11.1-linux-amd64.tar.gz' | sha256sum -c -
crictl-v1.11.1-linux-amd64.tar.gz: OK
[root@master ~]# tar zxvf crictl-v1.11.1-linux-amd64.tar.gz
[root@master ~]# mv crictl /usr/bin/
```
Install the prerequisite tool socat #
`socat` is a powerful command-line tool that establishes two bidirectional byte streams and transfers data between them. Simply put, one of its functions is port forwarding. Whether in K8S or Docker, port forwarding is essential when we need to access services from the outside. You may wonder why `socat` is rarely mentioned as a prerequisite elsewhere; a look at the K8S source code shows why.
```go
func portForward(client libdocker.Interface, podSandboxID string, port int32, stream io.ReadWriteCloser) error {
	// ... code unrelated to socat omitted ...
	socatPath, lookupErr := exec.LookPath("socat")
	if lookupErr != nil {
		return fmt.Errorf("unable to do port forwarding: socat not found.")
	}
	// containerPid is obtained in the omitted code above
	args := []string{"-t", fmt.Sprintf("%d", containerPid), "-n", socatPath, "-", fmt.Sprintf("TCP4:localhost:%d", port)}
	// ...
}
```
Installing `socat` is very simple: on CentOS, execute `sudo yum install -y socat`; on Debian/Ubuntu, execute `sudo apt-get install -y socat`.
Initialize the cluster #
All the preparations are complete, and we are ready to create a Kubernetes cluster. Note: if you need to configure a Pod networking solution, please read the last section of this chapter, "Configure Cluster Networking", before running `kubeadm init`.
```
[root@master ~]# kubeadm init
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
...
I0920 01:09:09.602908   17966 kernel_validator.go:81] Validating kernel version
I0920 01:09:09.603001   17966 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
...
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!
...
You can now join any number of machines by running the following on each node
as root:

  kubeadm join 202.182.112.120:6443 --token t14kzc.vjurhx5k98dpzqdc --discovery-token-ca-cert-hash sha256:d64f7ce1af9f9c0c73d2d737fd0095456ad98a2816cb5527d55f984c8aa8a762
```
The log above omits some output. From it, we can see that when creating the cluster, `kubeadm` checks the kernel version, Docker version, and other information. The warning indicates that our Docker version is newer than the most recently validated one; we can safely ignore it.
Then it downloads some images; the log also reminds us that we can do this in advance using `kubeadm config images pull`. Let's take a look at that command:
```
[root@master ~]# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3
```
For users in China creating a cluster with `kubeadm`, the problem they might encounter is that these images cannot be downloaded, causing cluster creation to fail. I have therefore provided a repository on a Chinese code-hosting platform: clone the project, enter the `v1.11.3` directory, and execute `sudo docker load -i xx.tar` for each `tar` file to import the images.
Alternatively, you can use the images provided by Alibaba Cloud at https://dev.aliyun.com/list.html?namePrefix=google-containers: simply replace `k8s.gcr.io` with `registry.aliyuncs.com/google_containers`, execute `docker pull`, and then retag the image.
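The pull-and-retag procedure can be scripted. A sketch, using the image list from the `kubeadm config images pull` output above (the mirror's availability and naming are assumptions):

```shell
MIRROR=registry.aliyuncs.com/google_containers

# Pull each required image from the mirror, retag it to the k8s.gcr.io
# name that kubeadm expects, then drop the mirror tag
for img in kube-apiserver-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3 \
           kube-scheduler-amd64:v1.11.3 kube-proxy-amd64:v1.11.3 \
           pause:3.1 etcd-amd64:3.2.18 coredns:1.1.3; do
  sudo docker pull "$MIRROR/$img"
  sudo docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
  sudo docker rmi "$MIRROR/$img"
done
```
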
Continuing with the log above, `kubeadm init` generates some files, which we referenced earlier in the kubelet service's drop-in configuration.
After generating these configuration files, the kubelet service is started; then a series of certificates and related configurations are generated, and some add-ons are applied. Finally, the cluster is successfully created, and we are told that a specific command can be used to join any machine to the cluster.
Verification #
In the previous steps, we installed `kubectl`, the CLI tool for K8S; now we use it to view cluster information:
```
[root@master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
The command `kubectl cluster-info` shows the addresses of the cluster master and cluster services. However, there is an error message at the end: "connection ... refused". Clearly something is wrong. The command `kubectl get nodes` shows the `Node` information in the cluster, but it also returns an error.

As mentioned earlier, K8S listens on its default ports, and `8080` is not one of them; without a kubeconfig, `kubectl` falls back to `localhost:8080`. This indicates that our `kubectl` configuration is incorrect.
Configure kubectl #
- Use the `--kubeconfig` parameter of `kubectl`, or the environment variable `KUBECONFIG`.
```
[root@master ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    13h       v1.11.3
[root@master ~]#
[root@master ~]# KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    13h       v1.11.3
```
- Using the command-line parameters can be cumbersome, so we can also change the default configuration file.
```
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    13h       v1.11.3
```
Configure Cluster Networking #
With the above configuration, we can now obtain `Node` information successfully. But from Chapter 2 we learned that a `Node` has a `status`, and currently our only `Node` is `NotReady`. By passing the `-o` parameter to `kubectl`, we can change the output format and view more detailed information.
```
[root@master ~]# kubectl get nodes -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  ...
  status:
    addresses:
    - address: master
      type: Hostname
    ...
    - lastHeartbeatTime: 2018-09-20T14:45:45Z
      lastTransitionTime: 2018-09-20T01:09:48Z
      message: 'runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
        message:docker: network plugin is not ready: cni config uninitialized'
      reason: KubeletNotReady
      status: "False"
      type: Ready
    ...
```
From the above output, we can see that the master is `NotReady` because "network plugin is not ready: cni config uninitialized". So, what is `CNI`? `CNI` stands for Container Network Interface, the interface specification K8S uses to configure Linux container networks.

We will not go into detail about the choice of network here. For now, we pick a widely used solution, `flannel`. Note, however, that to use `flannel` we must pass the `--pod-network-cidr=10.244.0.0/16` parameter to `kubeadm init`, and we must check that `/proc/sys/net/bridge/bridge-nf-call-iptables` is set to `1`. You can change that setting with `sysctl net.bridge.bridge-nf-call-iptables=1`.
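Note that the `sysctl` command above only changes the running kernel; to keep the setting across reboots you can persist it in a configuration file. A sketch (the file name `k8s.conf` is an arbitrary choice):

```shell
# Apply the setting to the running kernel immediately
sudo sysctl net.bridge.bridge-nf-call-iptables=1

# Persist it across reboots, then reload all sysctl configuration files
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system
```
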
Since we did not pass any parameters when creating the cluster earlier, we need to reset the cluster with `kubeadm reset` before we can use `flannel`.
```
[root@master ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
```
Reinitialize the cluster, this time passing the parameter:
```
[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.11.3
...
Your Kubernetes master has initialized successfully!
```
Note: the corresponding certificates and configurations are regenerated here, so you need to reconfigure `kubectl` as described above. At this point, `CNI` has still not been initialized, so we need to complete the following step.
```
# Note that the flannel configuration here is only applicable to K8S 1.11;
# if you are installing a different version of K8S, you need to replace this link
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
```
Wait a moment and check the Node status again:
```
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    12m       v1.11.3
```
You can see that the status is now `Ready`. From Chapter 3, we know that the smallest scheduling unit in K8S is the `Pod`. Let's check the status of the existing `Pods` in the cluster:
```
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE
kube-system   coredns-78fcdf6894-h7pkc         0/1       ContainerCreating   0          12m
kube-system   coredns-78fcdf6894-lhlks         0/1       ContainerCreating   0          12m
kube-system   etcd-master                      1/1       Running             0          5m
kube-system   kube-apiserver-master            1/1       Running             0          5m
kube-system   kube-controller-manager-master   1/1       Running             0          5m
kube-system   kube-flannel-ds-tqvck            1/1       Running             0          6m
kube-system   kube-proxy-25tk2                 1/1       Running             0          12m
kube-system   kube-scheduler-master            1/1       Running             0          5m
```
We can see that the two `coredns` Pods are in the `ContainerCreating` state and not yet ready. As mentioned in Chapter 3, a `Pod` actually goes through a scheduling process, which we will not discuss for now; we will explain it in later chapters.
Add a new Node #
Following the information printed after `kubeadm init`, execute the `kubeadm join` command on the new machine.
```
[root@node1 ~]# kubeadm join 202.182.112.120:6443 --token t14kzc.vjurhx5k98dpzqdc --discovery-token-ca-cert-hash sha256:d64f7ce1af9f9c0c73d2d737fd0095456ad98a2816cb5527d55f984c8aa8a762
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0921 04:00:54.805439   10677 kernel_validator.go:81] Validating kernel version
I0921 04:00:54.805604   10677 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "202.182.112.120:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://202.182.112.120:6443"
[discovery] Requesting info from "https://202.182.112.120:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "202.182.112.120:6443"
[discovery] Successfully established connection with API Server "202.182.112.120:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
When the above command completes, it indicates that the join was successful. Now, let's check the current cluster status on the master:
```
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    12m       v1.11.3
node1     Ready     <none>    7m        v1.11.3
```
You can see that `node1` has joined the cluster.
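Note that the bootstrap token printed by `kubeadm init` expires after 24 hours by default. If you add more nodes later, you can generate a fresh token together with the full join command on the master; this is a sketch of the standard `kubeadm` subcommand:

```shell
# Create a new bootstrap token and print the matching `kubeadm join` command
kubeadm token create --print-join-command

# List existing tokens and their remaining TTLs
kubeadm token list
```
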
Summary #
In this section, we used the officially recommended tool `kubeadm` to build a two-node cluster on servers. `kubeadm` automatically pulls the Docker images for the relevant components and wires them together, saving us the trouble of deploying each component one by one.
First, we learned about the basic system configuration needed to deploy K8S, and then installed some necessary tools to ensure that K8S runs normally.
Second, we learned about CNI and deployed the `flannel` network solution in the cluster.
Finally, we learned how to add Nodes to expand the cluster in the future.
Our study of cluster building ends here for now, but this is not the end; it is just the beginning. From the next chapter, we will learn about cluster management and how to truly use K8S.