45 Cloud-Native Architecture Design Based on Kubernetes #

Hello, I’m Kong Lingfei.

In the previous two lessons, we looked at how cloud technology has evolved. Software architecture has entered the cloud-native era, and cloud-native architecture has become the most popular way to deploy software. So in this lesson, I will talk with you about what cloud native is and how to design a cloud-native deployment architecture based on Kubernetes.

Introduction to Cloud Native #

Cloud native encompasses many concepts. For an application developer, the main focus is on how to develop and deploy applications. Therefore, when I introduce cloud native architecture here, I will mainly focus on the application layer and the system resource layer.

As for the application lifecycle management layer, when designing a cloud native architecture we mainly care about how to use existing cloud native technologies at that layer, so I will not go into its architecture in detail here. It will be touched on in the overview of the cloud native architecture later, which you can take a look at.

In addition, when discussing cloud native, it is inevitable to mention the Cloud Native Computing Foundation (CNCF). So, let’s briefly understand CNCF first.

Introduction to CNCF (Cloud Native Computing Foundation) #

CNCF (Cloud Native Computing Foundation) was founded in 2015, led by Google, and currently has over 100 corporate and institutional members, including giants such as Amazon, Microsoft, Cisco, and Red Hat. CNCF is committed to fostering and maintaining a vendor-neutral open-source community ecosystem to promote cloud native technology.

CNCF currently hosts a wide range of open-source projects, many of which are familiar to us, such as Kubernetes, Prometheus, Envoy, Istio, etcd, etc. For more projects, you can refer to the Cloud Native Landscape published by CNCF, as shown in the following figure:

What is Cloud Native? #

In 2018, CNCF officially released the Cloud Native Definition v1.0, which reads as follows:

“Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and scalable. Combined with robust automation, they allow engineers to make frequent, predictable updates with confidence.”

In simple terms, cloud native is a method of building and running applications: a collection of technologies plus a methodology. Cloud native involves three concepts: the technology stack, the methodology, and cloud native applications. The entire cloud native technology stack is built around Kubernetes and includes the following core technologies:

Let me introduce the basic content of these core technology stacks.

  • Containers: The underlying computing engine of Kubernetes, providing containerized computing resources.
  • Microservices: A software architecture concept used to build cloud native applications.
  • Service mesh: Built on top of Kubernetes, it acts as the foundation for communication between services and provides powerful service governance capabilities.
  • Declarative API: A new software development model that describes the desired state of an application to make the system more robust.
  • Immutable infrastructure: A new software deployment model. Once an application instance is created, it can only be rebuilt and cannot be updated. It is the foundation of modern operations.

In Lesson 43 and Lesson 44, I introduced containers, service mesh, and microservices. Here, let me provide additional information on immutable infrastructure and the declarative API.

The concept of immutable infrastructure was proposed by Chad Fowler in 2013. Specifically, once an application instance is created, it becomes read-only. If you want to make changes to this application instance, you can only create a new instance. This approach ensures the consistency of application instances, making it easier to implement DevOps and reducing the burden of managing configurations for operations personnel.

Declarative API refers to describing the desired state of an application through tools and ensuring that the application remains in the desired state.

The API design of Kubernetes is a typical example of declarative API. For example, when creating a Deployment, we declare the number of replicas for the application as 2 in the Kubernetes YAML file, i.e., set replicas: 2, and the Deployment Controller will ensure that the number of replicas remains 2. This means that if the current number of replicas is greater than 2, the Deployment Controller will delete the excess replicas, and if the current number of replicas is less than 2, it will create new replicas.
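To make this concrete, here is a minimal sketch of such a manifest (the names and image tag are illustrative, not taken from this lesson). Applying it tells Kubernetes what we want rather than how to achieve it, and the Deployment controller keeps reconciling the actual replica count toward the declared value:

```yaml
# Hypothetical Deployment declaring the desired state:
# "run 2 replicas of an nginx container". The Deployment
# controller continuously reconciles the cluster toward this state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2              # desired state: exactly 2 Pod replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # illustrative image tag
        ports:
        - containerPort: 80
```

If a replica crashes, or an extra one somehow appears, the controller converges the count back to 2 without any manual intervention.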

Declarative design is both a design concept and a working mode, and it makes your system more robust. A distributed system may encounter all kinds of unpredictable failures. With a declarative API, when a component fails you only need to compare the component's actual state recorded in the API server with the declared desired state, and then perform whatever actions are needed to reconcile the two.

What is a Cloud Native Application? #

After introducing what cloud native is, let’s now explain what a cloud native application is.

In general, a cloud native application is an application that is born for the cloud. The application is designed from the beginning to consider the cloud environment and can run on the cloud in the best possible way, fully leveraging and utilizing the various capabilities provided by the cloud platform. Specifically, a cloud native application has the following three characteristics:

  • From the perspective of application lifecycle management, it is developed and delivered using DevOps and CI/CD.
  • From the perspective of the application, it is designed based on the principles of microservices.
  • From the perspective of system resources, it is deployed using Docker + Kubernetes.

After reading the above introduction, you should have a preliminary understanding of cloud native and cloud native applications. Next, I will introduce one implementation of a cloud native architecture. Since there is a lot of content related to cloud native, this introduction only serves as a starting point to give you a basic understanding of cloud native architecture. When designing a cloud native architecture for specific business scenarios, you still need to consider factors such as business requirements, team capabilities, and technology stack.

How to Learn Cloud Native Architecture #

Cloud native architecture encompasses many concepts and technologies. So how do we learn it? In the previous two lectures, I introduced cloud technology from the perspectives of system resource layer, application layer, and application lifecycle management layer. These three layers essentially form the entire technology stack of cloud computing.

Today, I will continue to start from these three layers and give a relatively complete introduction to the design of cloud native architecture. Each layer involves many technologies. In this lecture, I will only introduce the core technologies of each layer and demonstrate how they contribute to the construction of each layer.

In addition to the architectural design at the functional level, we also need to consider the architectural design at the deployment level. For the deployment of cloud native architecture, there are usually two points of focus:

  • Disaster recovery capability: Disaster recovery capability refers to the ability to recover from application failures. In the Internet age, there are higher requirements for the disaster recovery capability of applications. Ideally, when a system fails, it should seamlessly switch to another available instance and continue to provide services without users noticing. However, seamless switching is technically difficult to achieve in practice. As an alternative, it is acceptable for the system to be unavailable for a certain period of time. Typically, this time period needs to be controlled within seconds, such as 5 seconds. Disaster recovery capability can be achieved through load balancing and health checks.

  • Scaling capability: Scaling capability refers to the system’s ability to scale up or down according to demand. Scaling can be done manually or automatically, and in the Internet age there are high expectations for automatic scaling. We can scale automatically based on metrics such as CPU usage and memory usage (a sketch follows this list). Scaling up lets the system handle more requests and improves throughput; scaling down saves cost. Implementing scaling capability relies on load balancing together with monitoring and alerting.
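
As a sketch of automatic scaling on Kubernetes (the resource names and thresholds are assumptions for illustration), a HorizontalPodAutoscaler can grow or shrink a Deployment between a minimum and maximum replica count based on average CPU utilization:

```yaml
# Hypothetical HorizontalPodAutoscaler: scales a Deployment between
# 2 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment   # the Deployment to scale (illustrative name)
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

When average CPU utilization across the Pods exceeds the target, more replicas are added (up to 10); when load drops, replicas are removed (down to 2), which saves cost.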

Both disaster recovery capability and scaling capability belong to high availability. In other words, at the deployment level, our architecture needs to have high availability.

Next, I will focus on the cloud native architecture design of the system resource layer and application layer, and briefly introduce the core functionalities of the application lifecycle management layer. After the architecture design is discussed, I will also introduce high availability architecture design for these layers.

Cloud Native Architecture Design for System Resource Layer #

Let’s start with the cloud native architecture design for the system resource layer. For a system, the architecture of system resources needs to be considered first. In the cloud native architecture, the current industry standard is to provide system resources (such as CPU, memory, etc.) through Docker and orchestrate Docker containers through Kubernetes. I have introduced the architecture of Docker and Kubernetes in Lesson 43. Here, I will mainly introduce the high availability architecture design at the system resource layer.

In the Docker+Kubernetes solution, high availability architecture is achieved through the high availability architecture of Kubernetes. To achieve high availability for the entire Kubernetes cluster, we need to implement the following two types of high availability:

  • High availability of the Kubernetes cluster.
  • High availability of applications deployed in the Kubernetes cluster.

Let’s look at these two high availability solutions separately.

Design of High Availability for the Kubernetes Cluster #

From what we learned in Lesson 43, we know that Kubernetes consists of eight core components: kube-apiserver, kube-controller-manager, kube-scheduler, cloud-controller-manager, etcd, kubelet, kube-proxy, and container runtime.

Among them, kube-apiserver, kube-controller-manager, kube-scheduler, cloud-controller-manager, and etcd are usually deployed on the master nodes, while kubelet, kube-proxy, and the container runtime run on the worker nodes. To achieve high availability of the Kubernetes cluster, we need to design high availability for these core components separately.

The high availability architecture diagram of the Kubernetes cluster is as follows:

In the solution shown above, all management nodes deploy components such as kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kube-apiserver communicates with the local etcd, and etcd synchronizes data among the three nodes. kube-controller-manager, kube-scheduler, and cloud-controller-manager communicate only with the local kube-apiserver, or access it through a load balancer.

In a Kubernetes cluster, there are multiple worker nodes. When a worker node fails, Kubernetes (the kube-controller-manager working together with kube-scheduler) detects the failure and rebuilds the affected Pods on other available nodes. In other words, as long as there are more than two worker nodes in the cluster, it can still provide services normally even if one of them fails. This also means that kubelet, kube-proxy, and the container runtime run as single instances on each worker node, and we do not need to design separate high availability for them.

Next, let’s take a look at how the master nodes achieve high availability for each component. Let’s start with the high availability design of the kube-apiserver component.

Since kube-apiserver is a stateless service, it can be implemented by deploying multiple kube-apiserver instances and mounting a load balancer on top of them. All other components that need to access kube-apiserver access it through the load balancer to achieve high availability for kube-apiserver.

kube-controller-manager, kube-scheduler, and cloud-controller-manager are stateful services, so their high availability cannot be achieved through a load balancer. Instead, these components use the --leader-elect=true flag to enable a distributed-lock-based leader election mechanism.

You can create multiple instances of kube-controller-manager, kube-scheduler, and cloud-controller-manager, and only one instance can obtain the lock at the same time to become the leader and provide services. If the current leader fails, other instances will automatically compete for the lock and become the new leader to continue providing services. In this way, we achieve high availability for the kube-controller-manager, kube-scheduler, and cloud-controller-manager components.

When kube-apiserver, kube-controller-manager, kube-scheduler, and cloud-controller-manager fail, we expect these components to automatically recover. In this case, these components can be deployed as Static Pods, so they can be automatically restarted when the Pods fail.
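
As a rough sketch of the Static Pod approach (the file path and image tag follow common kubeadm conventions and are assumptions here), a manifest placed in the kubelet's static Pod directory on each master node is managed directly by the local kubelet, which restarts the Pod whenever it fails:

```yaml
# Sketch: /etc/kubernetes/manifests/kube-controller-manager.yaml
# The kubelet on this master node watches the manifests directory
# and automatically restarts this Pod whenever it exits.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.28.0   # assumed version
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true          # enables leader election across masters
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/controller-manager.conf
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: File
```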

There are three approaches to achieving high availability for etcd:

  • Use an independent etcd cluster, which comes with its own high availability capability.
  • On each master node, deploy etcd as a Static Pod, and let the etcd instances on the different master nodes synchronize data with each other. Each kube-apiserver communicates only with the etcd on its own master node.
  • Use the self-hosted solution proposed by CoreOS: deploy the etcd cluster inside the Kubernetes cluster itself, and rely on the Kubernetes cluster’s fault-tolerance capability to keep etcd highly available.

These three approaches should be chosen based on actual needs. In production environments, the second approach is used most often.

With this, we have achieved high availability for the entire Kubernetes cluster. Next, let’s take a look at how high availability is achieved for applications in the Kubernetes cluster.

High Availability of Kubernetes Applications #

Kubernetes has built-in capabilities for application high availability. In Kubernetes, applications run as Pods. You create applications through a Deployment/StatefulSet and specify the number of replicas and the health check method for the Pods. When a Pod’s health check fails, the corresponding controller (the ReplicaSet behind a Deployment, or the StatefulSet controller) automatically terminates the faulty Pod and creates a new one to replace it.

You might ask: when a Pod fails, how do we prevent requests from still being routed to the faulty Pod and failing? Let me explain this in detail as well.

In Kubernetes, we can access Pods through Kubernetes Services or load balancers. When accessing Pods through a load balancer, the RS (Real Server) instances in the load balancer’s backend are actually Pods. By creating multiple Pods, the load balancer can automatically balance the load based on the health status of Pods.

Next, let’s focus on this issue: how is high availability achieved when accessing Pods through a Kubernetes Service?

The principle of high availability is illustrated in the following diagram:

In Kubernetes, we can assign labels to each Pod. A label is a key-value pair, for example, label: app=Nginx. When we access a Service, the Service matches Pods with the same label based on the Label Selector it is configured with, and uses the endpoint addresses of these Pods as its backend RS.

For example, you can take a look at the image above: the Label Selector of the Service is Labels: app=Nginx, so it will select the 3 Pod instances we created with label: app=Nginx. At this point, the Service will select one Pod based on its load balancing strategy to forward the request traffic. When one of the Pods fails, Kubernetes automatically removes the endpoint of the faulty Pod from the backend RS list associated with the Service.

The ReplicaSet created by the Deployment also detects the faulty Pod: the number of healthy Pod replicas drops to 2, which no longer matches the desired value of 3, so it automatically creates a new healthy Pod to replace the faulty one. Because the new Pod matches the Service’s Label Selector, Kubernetes automatically adds the new Pod’s endpoint to the Service’s endpoint list.

Through these operations, the IP address of the faulty Pod in the backend RS of the Service is replaced by the IP address of the new, healthy Pod. All the Pods accessed through the Service are now healthy. This way, high availability of applications is achieved through the Service.
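
Here is a minimal sketch of the Service-plus-Deployment pairing described above (names and image are illustrative). The Service selects Pods by the app=nginx label, and Kubernetes keeps its endpoint list in sync as Pods fail and are replaced:

```yaml
# Hypothetical Service: selects all Pods labeled app=nginx and
# load-balances traffic across their endpoints.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx              # must match the Pod labels below
  ports:
  - port: 80
    targetPort: 80
---
# The Deployment that creates the 3 Pods backing the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx          # the label the Service selects on
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```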

From the above analysis, we can also see that Service is essentially a load balancer.

Kubernetes also provides a RollingUpdate mechanism to ensure that services are available during updates. The general principle of this mechanism is to create a new Pod during an update and then terminate an existing Pod, repeating this process until all Pods are updated. During an update, we can also control the number of Pods updated at a time and the minimum number of available Pods.
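
The rolling-update behavior can be tuned in the Deployment spec. In this sketch (the values are illustrative), maxSurge limits how many extra Pods may be created during the update and maxUnavailable limits how many Pods may be down at any moment, which together keep the service available throughout the update:

```yaml
# Sketch of a RollingUpdate strategy: at most 1 extra Pod is created
# at a time, and at most 1 Pod may be unavailable during the update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 Pod above the desired count
      maxUnavailable: 1      # at most 1 Pod below the desired count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```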

Next, let’s take a look at the cloud-native architectural design and high availability design at the application layer.

Cloud-Native Architecture Design at the Application Layer #

In cloud-native architecture, we use a microservices architecture to build applications. Therefore, I will mainly focus on the construction of microservices architecture. Let me first talk about my understanding of microservices.

Essentially, a microservice is a lightweight web service. However, in the context of microservices, we usually consider not just a single microservice, but an application composed of multiple microservices. In other words, an application consists of multiple microservices, and multiple microservices bring about some challenges that a single web service does not face, such as complex deployment, difficult troubleshooting, complex service dependencies, long communication links, and so on.

In the microservices context, besides developing individual microservices (lightweight web services), we need to focus more on solving the challenges brought about by application microservices. Therefore, in my opinion, microservices architecture design includes two important aspects:

  • The construction of individual microservices
  • Solving the challenges of application microservices

Microservice Implementation #

We can build microservices in two ways:

  • Using lightweight web frameworks like Gin, Echo, etc.
  • Using microservices frameworks such as go-chassis, go-micro, go-kit, etc.

To address the challenges brought about by application microservices, we need to employ various technologies and techniques, each of which can solve one or more challenges.

In conclusion, in my opinion, microservices are essentially lightweight web services that include a series of technologies and techniques to solve the challenges brought about by application microservices. The technology stack for microservices is shown in the following diagram:

Different technology stacks can be implemented in different ways and solve different problems:

  • Monitoring and alerting, logs, CI/CD, and distributed scheduling can be implemented using the capabilities provided by the Kubernetes platform.
  • Service gateway, authentication, load balancing, rate limiting/circuit breaking/fallback can be implemented using a gateway, such as Tyk gateway.
  • Inter-process communication, REST/RPC serialization can be implemented using web frameworks like Gin, Go Chassis, gRPC, and Spring Cloud.
  • Distributed tracing can be implemented using Jaeger.
  • Unified configuration management can be implemented using the Apollo configuration center.
  • Message queue can be implemented using NSQ, Kafka, RabbitMQ, etc.

There are three ways to implement service registration/service discovery:

  • Service registration/service discovery can be achieved through a Kubernetes Service. Kubernetes has built-in service registration/service discovery capabilities, so with this approach we do not need any additional development (see the sketch after this list).
  • Service registration/service discovery can be implemented with a dedicated service registry. With this approach, we need to develop and deploy a registry component ourselves; typically a system such as etcd or Consul is used, with etcd being the most common choice.
  • Service registration/service discovery can be done through a gateway. In this case, service information can be directly reported to the gateway service, or the service information can be reported to a service registry, such as etcd, which can then be obtained by the gateway from the service registry.
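
For the first approach, a brief sketch (the service name and port are made up): creating a Service is effectively the registration step, and the cluster DNS provides the discovery step, so callers only need to know the stable Service name:

```yaml
# Hypothetical Service for a "user" microservice. Creating it
# registers a stable virtual IP and DNS name; other Pods in the
# cluster can discover and call it simply as
#   http://user-service.default.svc.cluster.local:8080
# (or just http://user-service:8080 from the same namespace),
# with no extra registry component to develop or deploy.
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: default
spec:
  selector:
    app: user
  ports:
  - port: 8080
    targetPort: 8080
```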

It is important to note that a vanilla Kubernetes cluster does not provide monitoring and alerting, logging, CI/CD, or similar functionality out of the box. In practice, we usually use a container platform built on top of Kubernetes, such as Tencent Cloud’s container service TKE.

In such a Kubernetes platform, monitoring and alerting, logging, CI/CD, and other functions are usually implemented through secondary development based on excellent open-source projects:

  • Monitoring and alerting: Implemented based on Prometheus.
  • Logs: Implemented based on the EFK log solution.
  • CI/CD: Can be developed in-house or implemented based on excellent open-source projects, such as Drone.

Microservices Architecture Design #

I have introduced how to implement microservices. Here, let me explain in detail how the various components/functions mentioned above are organically combined to build a microservices application. The following is the architecture diagram of microservices:

In the above diagram, we deploy the microservices application layer in a Kubernetes cluster. On top of the Kubernetes cluster, we can build other functions required by microservices, such as monitoring and alerting, CI/CD, logging, and call chains. These functions collectively manage the lifecycle of the application.

We place a load balancer in front of the microservices. Clients such as mobile applications, web applications, and API callers access the microservices through this load balancer.

When a microservice starts, it reports its endpoint information (usually in the format of ip:port) to the service registry. The microservice also periodically sends heartbeats to the service registry. In the service registry, we can monitor the status of microservices, remove unhealthy microservices, obtain access data between microservices, and so on. If we want to call microservices through a gateway or need the gateway to perform load balancing, we also need the gateway to obtain the endpoint information of microservices from the service registry.

High Availability Architecture Design for Microservices #

Let’s take a look at how to design the high availability capability of a microservices application.

We can deploy each microservice component as a Deployment/StatefulSet in the Kubernetes cluster, set the replica count to at least two, use rolling updates as the update strategy, configure health checks, and expose the services through a Kubernetes Service or a load balancer. In this way, without any extra work, we can rely directly on Kubernetes’ built-in fault-tolerance capabilities to make the microservices architecture highly available. A sketch of such a Deployment follows.
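
Below is a hedged sketch of such a microservice Deployment (the service name, image, and health-check path are assumptions): at least two replicas, a rolling-update strategy, and readiness/liveness probes so Kubernetes can detect unhealthy instances, stop routing traffic to them, and replace them:

```yaml
# Sketch of a microservice deployed with Kubernetes' built-in
# fault tolerance: 2+ replicas, rolling updates, and health checks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service            # hypothetical microservice
spec:
  replicas: 2                    # at least two replicas for availability
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: order
  template:
    metadata:
      labels:
        app: order
    spec:
      containers:
      - name: order
        image: example.com/order-service:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:          # removes the Pod from Service endpoints when not ready
          httpGet:
            path: /healthz       # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:           # restarts the container when it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
```

The Deployment would then be exposed through a Kubernetes Service or a load balancer, as described earlier.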

Overview of Cloud-Native Architecture #

In the previous section, I introduced the design of the system resource layer and the application layer in cloud-native architecture, but this does not constitute the entire cloud-native architecture design. Here, I will provide an overview of the design solution of cloud-native architecture with a diagram.

The cloud-native architecture depicted in the above diagram is divided into four layers. In addition to the previously mentioned system resource layer, application layer, and application lifecycle management layer, a unified access layer is added. Now, let me introduce the roles of these layers in the cloud-native architecture.

At the bottom is the system resource layer. In addition to providing traditional computing resources (physical machines, virtual machines), we can also provide containerized computing resources and highly available storage services. Containerized computing resources are built upon traditional physical machines/virtual machines.

In a cloud-native architecture, we should use containerized computing resources more, which isolate and provide computing resources externally through Docker container technology and orchestrate containers through Kubernetes. The combination of Docker + Kubernetes can build a very excellent system resource layer. This system resource layer comes with core capabilities that enterprise applications need: resource management, container scheduling, auto-scaling, network communication, service discovery, and health checks.

In the cloud-native era, these system resources, besides becoming containerized and lightweight, are increasingly moving toward serverless. Serverless system resources are application-oriented: there is no need to request or operate servers, billing is on demand, scaling is extremely elastic, and capacity can scale down to zero. Serverless system resources allow developers to focus solely on developing application functionality at the application layer, without spending time on system-level operations and maintenance.

Above the system resource layer, we can then build the application layer. In cloud-native architecture, the common way to build applications is to use a microservices architecture. When developing a microservices application, we can use a microservices framework or not. The difference between the two is that the microservices framework completes the service governance-related functions for us, eliminating the need for us to develop these functions.

In my opinion, there are pros and cons to this. The advantage is, of course, a reduction in development workload. As for the downside, there are mainly two aspects: on one hand, the service governance functions integrated into the microservices framework may not necessarily be the most suitable for our solution in terms of implementation methods and approaches. On the other hand, using a microservices framework means that our application will be coupled with the microservices framework, and we cannot freely choose service governance technologies and approaches. Therefore, in actual development, you should choose the microservices construction approach according to your needs.

Generally speaking, a microservices framework integrates at least these service governance functions: configuration center, call chain tracking, logging system, monitoring, service registration/service discovery.

Moving further up, we come to the unified access layer, which consists of two components: the load balancer and the gateway. The load balancer serves as the single entry point for services and is accessed by clients such as API callers, web browsers, and mobile terminals. It allows our application to switch to another instance automatically in the event of a failure and to scale horizontally when the load is high. Behind the load balancer sits a gateway that provides common capabilities such as security policies, routing policies, traffic policies, statistical analysis, and API management.

Finally, we can build a series of application lifecycle management technologies, such as service orchestration, configuration management, logging, storage, auditing, monitoring and alerting, message queues, and distributed tracing. Some of these technologies can be integrated into our Kubernetes platform based on Kubernetes, while others can be built separately for all products to access.

Public Cloud Edition of Cloud Native Architecture #

As mentioned earlier, cloud native architecture involves many technology stacks. If a company has the capability, it can choose to develop its own; if it feels that it lacks manpower or the cost is too high, it can also use the cloud native infrastructure that public cloud vendors have already developed. There are obvious advantages to using the cloud native infrastructure developed by cloud vendors: these infrastructures are professional, stable, and require no development or maintenance.

To complete the picture of cloud native architecture design, here I will also introduce a public cloud edition of cloud native architecture design. So, what cloud native infrastructure does a public cloud vendor provide? Here I will introduce Tencent Cloud’s cloud native solutions. The panoramic view of the solution is shown in the following figure:

As you can see, Tencent Cloud provides a full stack of cloud native capabilities.

Based on the underlying cloud native capabilities, Tencent Cloud provides a series of cloud native solutions. These solutions are pre-designed architectures for building cloud native applications, and can help enterprises quickly implement cloud native architecture. Examples of these solutions include hybrid cloud solutions, AI solutions, and IoT solutions.

So, what cloud native capabilities does Tencent Cloud provide at the underlying level? Let’s take a look together.

At the application layer, through the TSF microservice platform, we can implement the construction and governance of microservices. In addition, more application construction architectures are provided, such as:

  • Serverless Framework, which can build serverless applications.
  • CloudBase, an integrated cloud native application development platform that enables rapid development of mini programs, web applications, and mobile applications.

At the system resource layer, Tencent Cloud provides various forms of computing resources. For example, TKE can be used to create native Kubernetes clusters; EKS can be used to create serverless Kubernetes clusters; TKE-Edge can be used to create Kubernetes clusters that can manage edge nodes. In addition, Tencent Cloud also provides the open-source container service platform TKEStack. TKEStack is an excellent container cloud platform that leads in code quality, stability, and platform functionality among open-source container cloud platforms. You are also welcome to give it a star.

At the application lifecycle management layer, cloud native etcd and Prometheus services are provided. In addition, Tencent Cloud provides the CLS log service for storing and querying application logs; Cloud Monitor for monitoring your applications; the container image service TCR for storing Docker images; the CODING DevOps platform to support application CI/CD; and TDW for displaying the call chains of microservices.

At the unified access layer, Tencent Cloud provides a powerful API gateway. In addition, various serverless middleware services are provided, such as message queue TDMQ and cloud native database TDSQL.

All of these cloud native infrastructures have the common characteristic of being deployment-free and maintenance-free. In other words, at Tencent Cloud, you can focus only on using programming languages to write your business logic, and leave everything else to Tencent Cloud to handle.

Summary #

Cloud-native architecture design consists of four layers: system resource layer, application layer, unified access layer, and application lifecycle management layer.

In the system resource layer, Docker + Kubernetes can be used to provide computing resources. All our applications and services related to application lifecycle management can be deployed in a Kubernetes cluster, leveraging its core capabilities such as service discovery/registration, elastic scaling, and resource scheduling.

In the application layer, we can adopt a microservices architecture to build our applications. We can use a lightweight web framework such as Gin to construct the application and implement service governance functions ourselves (for example, through service mesh sidecars), or we can use an integrated microservices framework such as go-chassis or go-micro that already includes various service governance functionalities.

Because we adopt a microservices architecture, in order to maximize the reuse of some microservices functionalities such as authentication and authorization, rate limiting, etc., we provide a unified access layer. This layer can be built using technologies like API gateways, load balancers, service meshes, etc.

In the application lifecycle management layer, we can implement some cloud-native management platforms such as DevOps, monitoring and alerting, logging, configuration centers, etc. We can then make our applications access these platforms in a cloud-native manner and utilize their capabilities.

Lastly, I introduced Tencent Cloud’s cloud-native infrastructure. Through Tencent Cloud’s cloud-native capabilities, you can focus on writing your business logic using programming languages, while leaving the implementation of various cloud-native functionalities to the cloud provider.

After-class Exercises #

  1. Take a moment to think about the three implementation methods of service registration/service discovery. Which method is suitable for your project and why?
  2. Consider the important aspects that need to be taken into account when designing a cloud-native architecture. Feel free to share your thoughts in the comments section.

Feel free to leave a message and discuss with me. See you in the next class.