04 Microservices Application Scenario Analysis of Practical Difficulties in Kubernetes (K8s) #

In recent years, enterprise application architecture has shifted subtly. As Conway’s Law predicts, changes in organizational structure have driven the popularity of microservice architectures, and microservices have become a common upgrade path in technology selection as enterprises pursue digital transformation. Against this background, DevOps teams urgently need a unified infrastructure to maintain the entire lifecycle of microservice applications. This brings a new challenge: how to implement a Kubernetes cluster smoothly and quickly in microservice application scenarios.

Deployment Issues with Microservice Registry in Kubernetes #

A classic microservice system is built around a registry: clients register with the registry server over a client-server (CS) model so that microservice components can discover and invoke one another. Kubernetes, however, provides DNS-based service discovery and a Pod-level network, which breaks the single flat layer of the original physical network, so traditional microservice applications and those running inside a Kubernetes cluster cannot interconnect directly. To overcome this, technical teams generally adopt one of the following two approaches.

Creating a Layer 2 network to connect Pods with the physical network #

The main purpose of this approach is to leave the existing network structure unchanged and make the Kubernetes network adapt to the classic one. Each Pod is allocated an IP from a manageable network segment; common implementations include macvlan, Calico BGP, and Contiv. However, this approach directly breaks the architectural philosophy of Kubernetes, reducing it to a resource pool for running Pods: higher-level features such as Service, Ingress, and DNS can no longer be used. As Kubernetes versions iterate, such a feature-stripped cluster becomes less and less useful.

Deploying the registry to the Kubernetes cluster and using IP registration for external services #

This approach is currently the most popular, since it yields a network deployment structure that interoperates with legacy systems. Using a StatefulSet plus a Headless Service, we can easily build an AP-type (availability-first) registry cluster. Clients inside the cluster connect to the server using Kubernetes-internal domain names. For example:

eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-0.eureka.default.svc.cluster.local:8761/eureka,http://eureka-1.eureka.default.svc.cluster.local:8761/eureka,http://eureka-2.eureka.default.svc.cluster.local:8761/eureka
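
The stable per-Pod DNS names used above only resolve when a Headless Service backs the StatefulSet. A minimal sketch, assuming a three-replica Eureka StatefulSet named `eureka` in the `default` namespace (the image name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: eureka
spec:
  clusterIP: None          # Headless: gives each Pod a stable DNS record
  selector:
    app: eureka
  ports:
  - name: http
    port: 8761
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: eureka      # must match the Headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: example/eureka-server:1.0   # illustrative image
        ports:
        - containerPort: 8761
```

With this in place, `eureka-0.eureka.default.svc.cluster.local` through `eureka-2.…` resolve to the individual registry Pods.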

For microservices outside the cluster, we can expose the registry through a NodePort Service and register using a node IP directly. For example:

eureka:
  client:
    serviceUrl:
      defaultZone: http://<node-ip>:30030/eureka
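
Port 30030 here is assumed to be exposed by a NodePort Service along these lines:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: eureka-nodeport
spec:
  type: NodePort
  selector:
    app: eureka
  ports:
  - port: 8761
    targetPort: 8761
    nodePort: 30030      # reachable on every node's IP
```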

After walking through these two solutions, the conclusion is clear: no matter how it is done, deploying in conjunction with a traditional network is not ideal. How do we get out of this dilemma?

When we review the original design of Kubernetes, we find that it was conceived as application infrastructure for the data center, with no architectural provision for compatibility with traditional networks. This is why, however we operate, the deployment never feels right. Yet enterprise business logic is complex, and technical teams generally migrate business systems to new cloud-native clusters cautiously, so hybrid-architecture scenarios like this are bound to recur. Here we can borrow the industry’s proven practice of modularization: divide the application into units with the gateway as the boundary, and place a complete set of microservices on Kubernetes. Communication between the Kubernetes cluster and external services then happens over RPC/HTTP APIs, without breaking the Kubernetes cloud-native model.
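
With the gateway as the unit boundary, only the gateway needs to be reachable from outside the cluster. One way to sketch this is an Ingress routing external HTTP traffic to a hypothetical gateway Service (the host and Service names are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
  - host: api.example.com        # hypothetical external host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gateway        # hypothetical gateway Service
            port:
              number: 8080
```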

Optimizing the core capabilities of microservices #

In a classic microservice architecture, services frequently call one another through mechanisms such as Spring Feign, Dubbo RPC, or gRPC/Protobuf. To locate a service, we must consult a registry for the other party’s IP address. Combine this with the CoreDNS implementation inside a Kubernetes cluster, and a question naturally arises: if all service components live under one Namespace, their relationships can be written directly into configuration files by name, because CoreDNS already provides service discovery within the cluster. In other words, once you deploy microservices to Kubernetes, a registry like Eureka is essentially redundant.
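
For example, a client in the same namespace can address another service purely by its Service name, which Kubernetes DNS expands to the full cluster-internal domain. A sketch of application configuration (the `order`/`user` service names are hypothetical):

```yaml
# application.yml of a hypothetical "order" service calling a "user" service.
# "user" resolves via CoreDNS to user.<namespace>.svc.cluster.local,
# so no registry lookup is needed to find the target.
user:
  service:
    url: http://user:8080
```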

Many experienced architects will object that while the logic holds, a microservice stack like Spring Cloud is built around Eureka and may not operate without it. I consider this a historical legacy issue: on top of the Spring Boot framework, we can perfectly well build a microservice system on the service discovery capabilities of Kubernetes itself.

In addition, because the design of Kubernetes Pods supports the sidecar model, we can move common microservice governance concerns such as rate limiting, circuit breaking, mTLS security, and canary releases into an independent sidecar proxy container that handles them on the application’s behalf. This greatly lightens developers’ mental load, allowing them to focus on writing business code. As you may have noticed, this is exactly a service mesh. Indeed it converges on the service mesh design pattern, but reference implementations such as Istio have not yet been widely adopted in the industry, so for now we still need to build such a system ourselves on top of existing microservice frameworks.
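
A Pod-level sketch of the sidecar pattern: the application container serves only on a local port, while a proxy container intercepts inbound traffic and applies the governance concerns (Envoy is named purely as an illustration; the service name, images, and ports are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: order-service              # hypothetical service
spec:
  containers:
  - name: app
    image: example/order-service:1.0   # illustrative image
    ports:
    - containerPort: 8080          # only the proxy talks to this port
  - name: proxy                    # sidecar: rate limiting, mTLS, etc.
    image: example/envoy-proxy:1.0     # illustrative proxy image
    ports:
    - containerPort: 15000         # traffic enters through the proxy
```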

In addition, the business observability of the microservice system can be achieved through the Kubernetes ecosystem: ELK for collecting business logs, Prometheus for monitoring, and Grafana for building visual business dashboards, further rounding out the microservice capability system.
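
For instance, when Prometheus is deployed with the common annotation-based scrape configuration, exposing business metrics can be as simple as annotating the Pod template. Note that these annotations are a widely used convention that a matching Prometheus scrape config must honor, not a built-in Kubernetes feature; the port and path are assumptions:

```yaml
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"               # picked up by a matching scrape config
      prometheus.io/port: "8080"
      prometheus.io/path: "/actuator/prometheus" # e.g. Spring Boot Actuator endpoint
```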

Deployment strategies for microservice applications #

When deploying to Kubernetes clusters, many microservice applications default to the Deployment object. In fact, Kubernetes also provides the StatefulSet object. From its name, developers usually assume it is only for stateful applications that need persistent volumes, but StatefulSet also offers a powerful rolling-update strategy: because it gives each Pod a uniquely numbered name, containers can be updated one by one according to business requirements. This matters greatly for business systems. I would even argue that microservice containers should use StatefulSet rather than Deployment, with Deployment reserved for stateless applications such as Node.js or Nginx.
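
The ordered, one-by-one update behavior can also be tuned: a StatefulSet’s `RollingUpdate` strategy supports a `partition` field, so only Pods with an ordinal greater than or equal to the partition are updated, which enables canary-style rollouts. A sketch, with a hypothetical service name and image:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: order-service          # hypothetical microservice
spec:
  serviceName: order-service
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2             # only Pods with ordinal >= 2 are updated
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: app
        image: example/order-service:1.1   # illustrative image
```

Lowering `partition` step by step rolls the new version out to the remaining Pods in reverse ordinal order.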


In addition, deploying microservices on Kubernetes does not by itself make them highly available. You still need affinity/anti-affinity rules so that Kubernetes schedules replicas onto different hosts; with business containers spread out sensibly, a single host failure will not wipe out every replica of a service and take your business system down with it. Example:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: k8s-app
            operator: In
            values:
            - kube-dns
        topologyKey: kubernetes.io/hostname

During a microservice application update, Kubernetes must refresh endpoints and forwarding rules, which has always been a performance bottleneck in Kubernetes clusters. We can use a readinessProbe together with a preStop hook to upgrade the business smoothly. Here is an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
            httpHeaders:
            - name: X-Custom-Header
              value: Awesome
          initialDelaySeconds: 15
          timeoutSeconds: 1
        lifecycle:
          preStop:
            exec:
              command: ["/bin/bash", "-c", "sleep 30"]

Finally, as the operations team comes to handle more and more microservice containers, we can attach a PodDisruptionBudget to application containers to ensure that live business is not affected during maintenance operations. Here is an example:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper

Summary #

Kubernetes has a Service object whose name is easily confused with the “service” in microservices, but the two are in fact unrelated. We should not transplant a microservice architecture onto a container cluster wholesale; the more scientific approach is to understand the characteristics of both the microservice and Kubernetes systems and make them work together. When designing our own microservice architecture, we should think carefully about the business requirements of the CI/CD and deployment processes, and then analyze which solution best suits our team. Only then can we fully shed historical burdens and let Kubernetes technology play its role in the right place.