01 Introduction to Kubernetes - What Is It and Why We Need It #

Kubernetes is an extensible platform for orchestrating and managing containerized applications. Google open-sourced it in 2014, drawing on its extensive production experience. It has since become the de facto standard for container orchestration and has a very active community.

Kubernetes is widely used around the world. It has been adopted by major companies such as Google, Amazon, and GitHub, as well as Chinese companies like Alibaba, Tencent, Baidu, Huawei, and JD.com, many of which are actively running it in production.

Today, whether you work in operations, backend development, database administration, frontend development, or machine learning engineering, Docker is likely already part of your workflow to some extent. And once containers are widely used in production, Kubernetes naturally follows. Understanding, or better yet mastering, Kubernetes has therefore become an essential skill for engineers.

What is Kubernetes? #

When it comes to Kubernetes, most people think of it as a container orchestration tool or a Platform as a Service (PaaS). Kubernetes is not a PaaS, though: it operates at the container layer rather than the hardware layer, and it provides PaaS-like capabilities such as deployment, scaling, monitoring, load balancing, and logging. Unlike a fully integrated platform, however, these capabilities are mostly optional and pluggable.
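To make the deployment and scaling capabilities above a little more concrete, here is a minimal sketch using the official Kubernetes Python client. The Deployment name `demo-nginx`, the `nginx:1.25` image, and the `default` namespace are assumptions made up for this example, and a working kubeconfig with access to a cluster is assumed.

```python
# A minimal sketch: create a Deployment with the official Kubernetes
# Python client. Assumes `pip install kubernetes` and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # read ~/.kube/config (cluster access is assumed)
apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-nginx"),         # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=2,                                           # Kubernetes keeps 2 Pods running
        selector=client.V1LabelSelector(match_labels={"app": "demo-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-nginx"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="nginx",
                        image="nginx:1.25",                   # example image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; Kubernetes will schedule and supervise the Pods.")
```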

Kubernetes can support public cloud, private cloud, and hybrid cloud environments, making it highly portable. We can directly use Kubernetes or build our own container/cloud platform on top of it to achieve fast deployment, scaling, and optimized resource utilization.

It exposes generic interfaces such as the Container Network Interface (CNI), the Container Storage Interface (CSI), and the Container Runtime Interface (CRI), which open up more implementation choices and enable more vendors to join its ecosystem. The goal is to let any team extend Kubernetes and build their own platform on top of it without having to modify the core Kubernetes code.

Why do we need Kubernetes? #

Let’s bring this back to everyday work.

  • If you are a front-end developer, have you ever run into slow npm dependency installation, or node-sass builds that fail to install or pull in incompatible versions?
  • If you are a back-end developer, have you ever hit cases where the server environment differs from your local environment, so a feature behaves unexpectedly in production?
  • If you are an operations engineer, do you repeatedly set up environments, only to run into installation failures or incompatible versions?

At present, the best answer to these problems is standardization and containerization, and Docker is the most widely used tool for the job. Docker describes the environment in a Dockerfile and delivers it as an image, so environment inconsistencies are no longer a concern.
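As a small illustration of "describe the environment with a Dockerfile, deliver it as an image", here is a minimal sketch using the Docker SDK for Python. The build path, the `myapp:1.0` tag, and the port mapping are assumptions for this example, and a running local Docker daemon is assumed.

```python
# A minimal sketch: build an image from a Dockerfile and run it with the
# Docker SDK for Python. Assumes `pip install docker` and a running daemon.
import docker

client = docker.from_env()

# Build the image described by ./Dockerfile (path and tag are examples).
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run the same image anywhere Docker runs -- the environment travels with it.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    ports={"80/tcp": 8080},   # expose container port 80 on host port 8080
)
print(f"Started container {container.short_id} from image {image.tags}")
```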

During job interviews, whether for front-end or back-end roles, we often ask whether candidates are familiar with or have used Docker. If they have, we may follow up and ask how they would handle container orchestration, deployment, and scaling in production as the system grows.

At this point, most candidates cannot give an answer. On one hand, people outside operations rarely have a comprehensive view of the overall architecture and lack hands-on experience with it in their daily work. On the other hand, even operations engineers may not have dealt with this aspect yet.

As technical professionals, we should understand the overall system architecture, broaden our skills, and know the complete software lifecycle: development, delivery, deployment, and scaling as traffic grows.
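As one concrete example of the "scaling as traffic grows" step, here is a minimal sketch that raises the replica count of an existing Deployment via the Kubernetes Python client. The `demo-nginx` name refers back to the earlier sketch and is, again, only an assumption.

```python
# A minimal sketch: scale an existing Deployment (e.g. when traffic grows)
# using the Kubernetes Python client. Assumes the Deployment already exists.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Patch the Deployment's scale subresource to request 5 replicas.
apps_v1.patch_namespaced_deployment_scale(
    name="demo-nginx",            # hypothetical Deployment from the earlier sketch
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("Requested 5 replicas; Kubernetes reconciles the Pods to match.")
```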

In container orchestration, three names stand out: Kubernetes, Mesos, and Docker's own Swarm. Of the three, Swarm is the simplest: it focuses only on container orchestration and is developed by Docker itself, which makes it relatively beginner-friendly. For large-scale production use, however, it is not as capable as the others.

Mesos is not limited to container orchestration. It was created to abstract all resources in a data center, such as CPU, memory, network, and storage. The entire Mesos cluster is treated as a large resource pool, allowing various frameworks to schedule resources. For example, Marathon can be used to implement PaaS, and Spark or Hadoop can be run for computation. Because it supports resource isolation using technologies like Docker or LXC, it has been widely used for container orchestration in recent years.

With Kubernetes having gained far broader adoption than Mesos, Docker Swarm, and the rest, it is now clearly the preferred choice for managing containerized applications in production.

The goal of this guide is to help more developers (not only operations, backend, and frontend engineers) understand and master the basic skills of Kubernetes and get a working picture of its underlying architecture. Kubernetes covers a wide range of topics and evolves quickly, so this guide aims to be concise: it helps you quickly grasp the essential skills and apply them in production, without going deep into Docker internals or the Linux kernel. The approach is to start from the most common cases so that you can acquire the necessary knowledge faster and put it to use in real-world scenarios.