01 First Lecture on Cloud Native Course #

Key Points of this Lesson

  1. The development process of cloud native technology (Why should we learn this course)
  2. Course introduction and prerequisite knowledge (What does this course teach)
  3. Definition and technical highlights of cloud native (The main content of this lesson)

Why is it necessary to offer open courses on cloud-native technology? #

Brief History of Cloud-Native Technology Development #

First, let’s address the question of why open courses on cloud-native technology are needed. While cloud-native and CNCF (Cloud Native Computing Foundation) are currently popular keywords, these technologies are not entirely novel.

  • Between 2004 and 2007, Google was already using container technologies such as cgroups at scale internally.
  • In 2008, Google merged cgroups into the mainline Linux kernel.
  • In 2013, the Docker project was officially launched.
  • In 2014, the Kubernetes project was released. The reason is easy to understand: with the advent of containers and Docker, a way was needed to manage these containers conveniently, quickly, and elegantly, and that is the original purpose of the Kubernetes project. After Google and Red Hat released Kubernetes, the project developed rapidly.
  • In 2015, the CNCF (Cloud Native Computing Foundation) was founded by Google, Red Hat, Microsoft, and other large cloud computing companies, as well as some open-source companies. When CNCF was established, it had 22 founding members, and Kubernetes became the first open-source project hosted by CNCF. Since then, CNCF has grown rapidly.
  • In 2017, CNCF had 170 members and 14 foundation projects.
  • In 2018, CNCF celebrated its third anniversary with 195 members, 19 foundation projects, and 11 incubation projects. Such rapid development speed is rare in the field of cloud computing.

Current State of the Cloud-Native Technology Ecosystem #

Therefore, the cloud-native technology ecosystem we are discussing today is a vast collection of technologies. CNCF maintains a landscape of cloud-native technologies (https://github.com/cncf/landscape), which already includes more than 200 projects and products that fit CNCF's view of cloud native. If we take this landscape as the backdrop for our thinking, the focus of today's discussion of cloud-native technology revolves around the following points:

  1. Cloud Native Computing Foundation (CNCF)
  2. Cloud-native technology communities, such as the 20+ official projects hosted by CNCF, which collectively constitute the foundation of the modern cloud computing ecosystem. Notably, Kubernetes has become the fourth most active open-source project in the world.
  3. Beyond these two points, major public cloud providers worldwide have embraced Kubernetes, and more than 100 technology startups continue to invest in cloud-native technology. Alibaba's announcement of a comprehensive move to the cloud, going cloud-native along the way, is one example of how major technology companies are adopting it.

We Are Currently at a Crucial Turning Point in Time #

In 2019, we are at a crucial turning point in the cloud-native era. Why do we say that? Let's briefly review the history.

Starting with the release of the Docker project in 2013, Docker made a full operating-system sandbox easy to obtain, so users could package their applications more completely and developers could get the smallest runnable unit of an application without depending on any PaaS capability. This posed a serious challenge to the classic PaaS industry.

In 2014, the release of the Kubernetes project marked Google's "rebirth" of its internal Borg/Omega system thinking through the open-source community, and it introduced the concept of "container design patterns." The reason Google chose to open-source Kubernetes rather than the Borg project directly is easy to understand: Borg/Omega was too complex and could not be made available to people outside of Google, but the design thinking behind it could be exposed through Kubernetes. This is the important background behind the open-sourcing of Kubernetes.

2015 to 2016 was the era of the "orchestration wars," in which Docker Swarm, Mesos, and Kubernetes competed for the container-orchestration domain. The reason for the competition was straightforward: while containers themselves were valuable, generating commercial value, or value for the cloud, required occupying a favorable position in orchestration. Swarm and Mesos each had their strengths: Swarm leaned more toward the ecosystem, while Mesos focused more on technical capability. Kubernetes combined the advantages of both and emerged as the winner in 2017, becoming the container-orchestration standard from then until now. A representative event in this process was Docker's announcement that Kubernetes would be built into its core product, while the Swarm project gradually ceased maintenance.

By 2018, the concept of cloud-native technology began to take hold. Kubernetes and containers had become established standards for cloud providers, and software development philosophies centered on the "cloud" were gradually taking shape. Now, in 2019, it seems that further changes are on the horizon.

2019: The Year of Widespread Cloud-Native Technology Adoption #

Why do we believe that 2019 is likely to be a crucial turning point? We believe that 2019 will be the year of widespread adoption of cloud-native technology. Firstly, we can see that in 2019, Alibaba announced its comprehensive move to the cloud and stressed the importance of going cloud-native. We can also observe that the software development philosophy centered around the “cloud” is gradually becoming the default option for all developers. Cloud-native technologies like Kubernetes are becoming mandatory knowledge for technical professionals, and a large number of job opportunities in this field are emerging.

Against this background, merely "knowing Kubernetes" is no longer sufficient. The importance of "understanding Kubernetes" and "mastering cloud-native architecture" is becoming increasingly prominent. Starting from 2019, cloud-native technology will be massively adopted, which is why now is the time for everyone to learn and invest in it.

What is the “Cloud Native Technology Open Course” like? #

Based on the mentioned technology trends, Alibaba and CNCF jointly launched the Cloud Native Technology Open Course. So what exactly does this open course cover?

Course Curriculum #

The first phase of the Cloud Native Open Course focuses mainly on application containers and Kubernetes; subsequent courses on topics such as Service Mesh and Serverless will follow. In this first phase, we divide the course into two parts, fundamental knowledge and advanced knowledge:

  • First, we hope to help everyone consolidate their foundation through the first part of the course. Then, we will delve into more advanced topics and analyze code at a deeper level. We hope to help everyone learn cloud native technology in a progressive manner;
  • Second, after each course, our instructors will set corresponding self-assessment test questions. These test questions are actually the most effective summary of the course. Through post-course evaluations, we hope to help everyone summarize the key points and build their own cloud native knowledge system;
  • Finally, our instructors have designed cloud-based practices behind each knowledge point. As the saying goes, “practice makes perfect”, learning computer-related knowledge still requires hands-on practice. In the cloud-based practice part, the instructors will provide detailed practice steps for everyone to self-study after the course. In this section, Alibaba Cloud will also provide a certain amount of vouchers to help everyone practice on the cloud more effectively.

The above three parts constitute the teaching content of the Cloud Native Technology Open Course launched by Alibaba Cloud and CNCF.

Course Schedule #

In terms of the course schedule, the initial plan is as follows: the first class will launch in September 2019, after which two classes will be released each week, for a total of 29 sessions. Each knowledge point is followed by a self-assessment test.

As for the lineup of instructors, it is the part of this open course we are most proud of. The course will mainly be taught by senior members and project maintainers from the CNCF community, and many of the instructors are expert engineers from the Alibaba Cloud Container Platform team. We will also invite senior experts from the cloud-native community and external instructors to cover some of the content. In addition, we will arrange occasional live broadcasts, course Q&A sessions, and practical case studies. By bringing all of this together, we hope to present the most complete, authoritative, and influential Cloud Native Technology Open Course in China.

Prerequisite Knowledge for the Course #

You may be wondering what prerequisite knowledge is needed before learning the fundamentals of cloud native. In general, there are three main areas of prerequisite knowledge:

  1. Knowledge of the Linux operating system: Mainly some general foundational knowledge, with some experience in developing under Linux being preferred;
  2. Fundamentals of computer science and programming: This requires an entry-level engineer or senior undergraduate level of knowledge;
  3. Foundation in using containers: We hope everyone has some experience in using containers, such as docker run and docker build, and preferably some experience in developing Dockerized applications. Of course, we will also explain related basic knowledge in the course.
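As a quick refresher on the container prerequisite above, here is a minimal Dockerfile sketch. The file name, base image, and application script are illustrative assumptions, not part of the course material:

```dockerfile
# Minimal illustrative Dockerfile: packages a (hypothetical) Python
# script together with its runtime so the resulting image is
# self-contained.
FROM python:3.11-slim

# Copy the hypothetical application into the image.
WORKDIR /app
COPY app.py .

# The command the container runs when started.
CMD ["python", "app.py"]
```

With a file like this in place, docker build -t my-app . produces the image and docker run my-app starts a container from it; these are exactly the two commands referred to in point 3 above.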

What is “Cloud Native”? How to implement Cloud Native? #

After introducing the course, let’s talk in detail about “Cloud Native”: what is “Cloud Native”? How to implement Cloud Native? These two questions are also the core content of the entire course.

Definition of Cloud Native #

Many people ask, "What exactly is Cloud Native?" In essence, Cloud Native is a best path, or a set of best practices. More specifically, Cloud Native lays out a low-cognitive-burden, agile, scalable, and replicable path for users to maximize the capabilities and value of the cloud. Cloud Native is therefore a set of guiding principles for software architecture design: software designed according to these principles is naturally "born in and grown on the cloud," can make maximum use of the cloud's capabilities, and integrates seamlessly with the cloud, thereby maximizing the cloud's value. The greatest value and vision of Cloud Native is the belief that future software will be born on the cloud and will follow a new mode of software development, deployment, and operations that maximizes the capabilities of the cloud. With this in mind, think about why container technology is revolutionary.

In fact, container technology is closely analogous to the shipping-container revolution, because it gives applications a "self-contained" definition. Applications packaged this way can be deployed on the cloud in an agile, scalable, and replicable manner, fully leveraging the cloud's capabilities. This is the revolutionary impact of container technology, and it is why containers are the core foundation of Cloud Native.

Technical Scope of Cloud Native #

The technical scope of Cloud Native includes the following aspects:

  • The first part is the definition and development process of cloud applications. This includes application definition and image creation, configuring CI/CD, messaging and streaming, and databases, among others.
  • The second part is the orchestration and management process of cloud applications. This is also an area of focus for Kubernetes. It includes application orchestration and scheduling, service discovery and governance, remote invocation, API gateway, and service mesh.
  • The third part is monitoring and observability. This part emphasizes how cloud applications can be monitored, log collection, tracing, and how to perform disruptive testing on the cloud, known as chaos engineering.
  • The fourth part is the underlying technologies of Cloud Native, such as container runtimes, Cloud Native storage technologies, Cloud Native networking technologies, etc.
  • The fifth part is the Cloud Native toolset. On top of the core technologies mentioned above, there are many supporting ecological or peripheral tools that need to be used, such as process automation and configuration management, container image repositories, Cloud Native security technologies, and cloud-based password management, among others.
  • Finally, there is the concept of Serverless. Serverless is a special form of PaaS and defines a more “extremely abstract” way of application development, including concepts such as FaaS and BaaS. Both FaaS and BaaS have the typical feature of billing based on actual usage (pay as you go), so understanding Serverless billing is also important.

Two Theoretical Foundations of Cloud Native Thinking #

After understanding the technical scope of Cloud Native, you will find that it involves a great deal of technical content, but the technical essence is similar. The essence of Cloud Native technology rests on two theoretical foundations.

  • The first theoretical foundation is “immutable infrastructure.” This is currently achieved through container images, which means that the infrastructure of an application should be immutable, self-contained, self-descriptive, and can be fully migrated across different environments.
  • The second theoretical foundation is “cloud application orchestration theory.” The current implementation approach is the “container design pattern” proposed by Google, which is the main topic of the Kubernetes part in this course.
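To make the "container design pattern" concrete, here is a hedged sketch of the classic sidecar pattern as a Kubernetes Pod manifest; all names and images are illustrative assumptions:

```yaml
# Sidecar pattern sketch: a main application container and a
# log-shipping sidecar run in the same Pod and share an emptyDir
# volume, so they can cooperate without being built into one image.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # illustrative name
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}              # shared scratch space, lives as long as the Pod
  containers:
    - name: main-app            # the business container writes logs here
      image: my-app:1.0         # hypothetical image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-shipper         # the sidecar reads the same directory
      image: log-agent:1.0      # hypothetical image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
          readOnly: true
```

The design point is that each concern lives in its own container, while the Pod is the unit of scheduling and sharing; this composition idea is what the "container design pattern" refers to and is covered in depth in the Kubernetes part of the course.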

The Process of Infrastructure Evolving towards the Cloud #

First, let’s introduce the concept of “immutable infrastructure”. In fact, the infrastructure that applications rely on is also undergoing a process of evolution towards the cloud. For example, traditional application infrastructure is often mutable.

You may have done something like this before: when you need to deploy or update software, the process roughly goes as follows: first connect to the server via SSH, then manually upgrade or downgrade packages, adjust configuration files on the server one by one, and deploy new code directly onto the existing machine. The infrastructure is therefore constantly being adjusted and modified. In the cloud, however, cloud-friendly application infrastructure is immutable.

In this model, the update process above works differently: once the application is deployed, its infrastructure is never modified again. If an update is needed, a new image is built and a new service is rolled out to replace the old one directly. Direct replacement is possible because containers provide a self-contained environment that includes all the dependencies the application needs to run. The application therefore does not need to care what has changed in the container; only the container image itself is modified. Cloud-friendly infrastructure can thus be replaced and switched at any time, because containers provide agility and consistency. This is the application infrastructure of the cloud era. In the well-known analogy, cloud-era infrastructure is replaceable "cattle" that can be swapped out at any time, while traditional infrastructure is a unique "pet" that must be carefully nurtured. This contrast highlights the advantage of immutable infrastructure in the cloud era.
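The "replace rather than modify" update flow described above can be sketched with a Kubernetes Deployment; the resource name and image tags are illustrative assumptions:

```yaml
# Immutable-infrastructure sketch: running containers are never patched
# in place. An update means building a new image (e.g. my-app:2.0),
# changing the tag below, and re-applying the manifest; Kubernetes then
# replaces the old Pods with new ones created from the new image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # to update, change this tag, never the running container
```

Re-applying this manifest with a new tag (for example via kubectl apply) triggers a rolling replacement of the old containers, which is the update mechanism the paragraph above describes.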

The Significance of the Evolution of Infrastructure towards the Cloud #

Therefore, the evolution of infrastructure towards "immutability" brings us two very important advantages.

  1. Consistency and reliability of the infrastructure. The same image looks the same whether it is started in the United States, China, or India, and the OS environment inside it is identical for the application. The application does not need to care where the container runs; this is a very important characteristic of infrastructure consistency.
  2. The images themselves are self-contained: they carry all the dependencies the application needs to run, so they can be migrated to any location in the cloud.

In addition, cloud-native infrastructure provides simple and predictable deployment and operations capabilities. Because the image exists, the application is self-descriptive, and the container built from the image can be operated automatically, for example through Kubernetes' Operator technology. The application is thus a self-contained unit that can be migrated to any location in the cloud, which makes automating the entire process very easy.

Applications themselves also scale better: going from 1 instance to 100, and then to 10,000, is the same operation for a containerized application. Finally, immutable infrastructure lets us quickly build the control systems and supporting components around it, because these components are themselves containerized and conform to the same immutable-infrastructure theory. These are the biggest advantages immutable infrastructure brings to users.
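The scaling behavior described above is a declarative, one-field change in Kubernetes. A minimal sketch, where the surrounding Deployment spec and the replica count are purely illustrative:

```yaml
# Fragment of a hypothetical Deployment spec: scaling from 1 to 100
# instances is just a change to this one field followed by re-applying
# the manifest; the process is identical at any scale.
spec:
  replicas: 100
```

Because every replica is created from the same immutable image, adding the 10,000th instance is no different from adding the second.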

Key Technologies of Cloud-Native #

When we look back at the key technologies of cloud-native or the technical theories it depends on, we can see that there are mainly four directions:

  1. How to build self-contained and customizable application images;
  2. Whether it is possible to achieve fast application deployment and isolation capabilities;
  3. Automated management of application infrastructure creation and destruction;
  4. Reproducible control systems and supporting components.

These four key technologies of cloud-native are the four main approaches to implementing cloud-native technologies, and these four technologies are the core knowledge covered in this course’s 17 technical points.

Summary of this section #

  • “Cloud-native” has important significance, and it is essential for technology professionals to upgrade themselves in the cloud era.
  • “Cloud-native” defines the best path for application development and delivery in the cloud era.
  • “Cloud-native” applications are born in and grow on the cloud, with the aim of maximizing the capabilities of the cloud.

Instructor’s Comment #

“The future of software must be born and grown in the cloud” is the core assumption of cloud-native philosophy. The so-called “cloud-native” is actually defining the best path for applications to leverage the capabilities of the cloud and maximize its value. Without the “application” as the carrier, the concept of “cloud-native” cannot be discussed; container technology is one of the important means to implement this concept and revolutionize software delivery.

The Kubernetes project, which is the focus of this cloud-native open course, is the core and key to implementing the entire “cloud-native” concept. It is rapidly becoming the highway that connects the “cloud” and the “application,” delivering applications quickly to any location in the world in a standard and efficient manner. Today, “cloud-native application delivery” has become one of the hottest technology keywords in the 2019 cloud computing market. I hope that the students who take this course can apply what they learn and continue to pay attention to the trend of “cloud-native application management and delivery” based on K8s.