
29 SAE Extreme Application Deployment Efficiency #

As a Serverless platform, SAE provides fully managed application services and makes full use of cloud-native technology: it uses containers as the application carrier and provides agile deployment, orchestration, and elasticity capabilities. SAE abstracts away the underlying infrastructure, so the lowest-level resource users perceive is the application instance itself, and creating and deploying applications are their main points of interaction with the platform.

Next, we will introduce the efficiency optimization work we have done in the process of application creation, deployment, and restart.

Application Creation #

First, let’s talk about application creation. Currently, users can deploy applications through container images or WAR/JAR packages. On the platform side, these are ultimately packaged into container images and distributed; the platform then applies for IaaS resources such as compute, storage, and network, and proceeds to create the container execution environment and the application instance.


In this process, there are steps involving scheduling, cloud resource creation and mounting, image pulling, container environment creation, and application process creation. The efficiency of application creation is closely related to these steps.

Naturally, we asked whether some of these steps could be parallelized to reduce the overall creation time. After analyzing where the time goes, we found that several of the execution steps are decoupled and independent of one another. For example, creating and attaching the elastic network interface (ENI) and pulling the application image are independent. Based on this, we run the independent steps in parallel, shortening application creation time without changing the semantics of the creation flow.
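As a minimal sketch of this idea (not SAE's actual implementation), the following Python snippet overlaps the two independent steps named above. The helper functions `create_and_attach_eni` and `pull_image` are hypothetical stand-ins for the real IaaS and registry calls:

```python
import concurrent.futures
import time

def create_and_attach_eni(instance_id: str) -> str:
    """Hypothetical stand-in for creating and attaching an elastic network interface."""
    time.sleep(3)  # stands in for the real IaaS call latency
    return f"eni-for-{instance_id}"

def pull_image(image: str) -> str:
    """Hypothetical stand-in for pulling the application image onto the host."""
    time.sleep(4)  # stands in for registry pull latency
    return image

def create_instance(instance_id: str, image: str) -> None:
    start = time.time()
    # The two steps have no data dependency, so they run concurrently
    # instead of back to back (3 s + 4 s becomes ~4 s in this toy example).
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        eni_future = pool.submit(create_and_attach_eni, instance_id)
        image_future = pool.submit(pull_image, image)
        eni, img = eni_future.result(), image_future.result()
    # Only the steps that depend on both results stay sequential:
    # creating the container environment and starting the application process.
    print(f"{instance_id}: {eni} and {img} ready in {time.time() - start:.1f}s")

if __name__ == "__main__":
    create_instance("app-instance-1", "registry.example.com/demo:v1")
```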

Application Deployment #

Application deployment here refers to upgrading an application to a new version. The traditional deployment process can be divided into the following steps:

  1. First, create an instance of the new version.
  2. Wait for the instance to start and the business process to become ready, then route traffic to it by adding it to the corresponding SLB backend.
  3. Finally, remove the old-version instance from the SLB backend and destroy it.

In a phased (batched) release, this cycle repeats for each batch of instances to complete the rolling upgrade. Throughout this process, the application instances are rebuilt from scratch, and their IP addresses change.
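The following self-contained sketch illustrates this traditional rolling upgrade. The `Compute` and `Slb` classes and the `demo:*` image names are toy stand-ins, not a real SDK:

```python
import itertools
import time

class Compute:
    """Hypothetical compute API: creates and destroys whole instances."""
    _ids = itertools.count(1)

    def create_instance(self, image):
        instance = {"id": f"i-{next(self._ids)}", "image": image,
                    "ip": f"10.0.0.{next(self._ids)}"}
        print(f"created {instance['id']} ({image}) with new IP {instance['ip']}")
        return instance

    def destroy_instance(self, instance):
        print(f"destroyed {instance['id']}")

class Slb:
    """Hypothetical SLB API: attaches and detaches backends."""
    def add_backend(self, instance):
        print(f"SLB now routes traffic to {instance['id']}")

    def remove_backend(self, instance):
        print(f"SLB stopped routing traffic to {instance['id']}")

def wait_until_ready(instance):
    time.sleep(0.1)  # stands in for a readiness / health check

def rolling_upgrade(old_instances, new_image, batch_size, compute, slb):
    """Upgrade in batches: create new -> route traffic -> drain and destroy old."""
    for i in range(0, len(old_instances), batch_size):
        for old in old_instances[i:i + batch_size]:
            new = compute.create_instance(new_image)  # 1. new-version instance
            wait_until_ready(new)                     # 2. wait for readiness,
            slb.add_backend(new)                      #    then attach to SLB
            slb.remove_backend(old)                   # 3. detach and destroy the old one
            compute.destroy_instance(old)

if __name__ == "__main__":
    compute, slb = Compute(), Slb()
    old = [compute.create_instance("demo:v1") for _ in range(2)]
    rolling_upgrade(old, "demo:v2", batch_size=1, compute=compute, slb=slb)
```

Note that every new-version instance in this flow comes up with a fresh IP address, which is exactly the behavior in-place deployment avoids.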

As mentioned earlier, creating an application instance involves scheduling, cloud resource creation and mounting, image pulling, container environment creation, and application process startup. For a deployment, it is unnecessary to repeat all of these steps: only the application's execution environment and process need to be recreated from the new image.

Therefore, we implemented in-place deployment. During a rolling upgrade, we keep the original application instances along with their mounted cloud network and cloud storage resources, and update only the instances' execution environments. The original deployment process is thus simplified to:

Remove traffic by detaching the running instances from the SLB backend -> upgrade the instances in place -> route traffic back to the upgraded instances

After an in-place upgrade, the application instances retain their original IP addresses. In our tests on a 2-instance application, deployment efficiency improved roughly 4x, reducing the deployment time from nearly 1 minute to just over ten seconds.
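A toy sketch of the simplified flow follows. `Slb` and `Runtime` are hypothetical stand-ins rather than a real SDK; the point is that the instance record (and hence its IP, ENI, and storage mounts) is reused, and only the container is recreated from the new image:

```python
import time

class Slb:
    """Hypothetical SLB API."""
    def add_backend(self, inst):
        print(f"traffic -> {inst['id']} ({inst['ip']})")

    def remove_backend(self, inst):
        print(f"traffic removed from {inst['id']}")

class Runtime:
    """Hypothetical container runtime: replaces only the execution environment."""
    def replace_container(self, inst, image):
        inst["image"] = image  # container recreated in place; instance and IP unchanged
        print(f"{inst['id']}: container now runs {image}, IP stays {inst['ip']}")

def in_place_upgrade(instances, new_image, slb, runtime):
    for inst in instances:
        slb.remove_backend(inst)                    # 1. remove traffic
        runtime.replace_container(inst, new_image)  # 2. upgrade in place
        time.sleep(0.1)                             #    (readiness check stand-in)
        slb.add_backend(inst)                       # 3. route traffic back

if __name__ == "__main__":
    instances = [{"id": "i-1", "ip": "10.0.0.1", "image": "demo:v1"},
                 {"id": "i-2", "ip": "10.0.0.2", "image": "demo:v1"}]
    in_place_upgrade(instances, "demo:v2", Slb(), Runtime())
```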


Application Restart #

Finally, let me briefly introduce the upcoming capability of in-place restart.

Restarting an instance is necessary in certain operations and maintenance scenarios. For application restart, we want something closer to a single reboot on a Linux system rather than rebuilding the instance. The approach is straightforward: in a containerized environment, a stop-and-start can be performed directly through the container engine API. Compared with in-place upgrade, in-place restart further eliminates the image update and the rebuilding of the execution environment; and compared with restarting an ECS instance, restarting a container is much lighter, completing on the order of seconds.
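To illustrate the principle, here is a minimal example using the Docker SDK for Python (docker-py). SAE's actual container engine integration is not described in this article, and the container name below is a hypothetical example:

```python
import docker

def restart_in_place(container_name: str, timeout: int = 10) -> None:
    """Restart only the container process; the instance, its IP, and its
    mounted volumes are untouched, so the operation completes in seconds."""
    client = docker.from_env()
    container = client.containers.get(container_name)
    # Stop with a grace period, then start again; no image pull,
    # no new execution environment.
    container.restart(timeout=timeout)

if __name__ == "__main__":
    restart_in_place("app-instance-1")  # hypothetical container name
```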