15 Serverless Kubernetes Application Deployment and Scaling #

Introduction: This article is divided into three parts. First, we will demonstrate the creation of a Serverless Kubernetes cluster and the deployment of business applications. Then, we will introduce the common features of Serverless Kubernetes. Finally, we will discuss how to scale applications.

Cluster Creation and Application Deployment #

1. Cluster Creation #

After fully understanding the basic concepts of Serverless Kubernetes, we can directly enter the Container Service console (https://cs.console.aliyun.com/#/authorize) to create a cluster.

[Figure: cluster creation page in the Container Service console]

On the creation page, there are three main attributes to select or fill in:

  • Region and Kubernetes version for cluster creation;
  • Network properties: choose to automatically create a VPC or specify an existing VPC;
  • Cluster capabilities and services: choose as needed.

After completing these attributes, click “Create Cluster”. The entire creation process takes 1-2 minutes.

[Figure: cluster creation complete]

2. Application Deployment #

After the cluster is created, we can deploy a stateless nginx application in three steps:

  • Application basic information: name, number of Pods, labels, etc.;
  • Container configuration: image, required resources, container port, data volume, etc.;
  • Advanced configuration: services, routing, HPA, Pod labels, etc.

[Figure: application deployment configuration]
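For readers who prefer a declarative workflow, the three console steps above map roughly onto a single manifest. The sketch below is illustrative only: the application name, replica count, and resource requests are assumptions rather than values from this article.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ask-demo                   # assumed application name
  labels:
    app: nginx                     # application label
spec:
  replicas: 2                      # number of Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx                 # Pod labels
    spec:
      containers:
      - name: nginx
        image: nginx:latest        # container image
        ports:
        - containerPort: 80        # container port
        resources:
          requests:
            cpu: 500m              # required resources (assumed values)
            memory: 512Mi
```

Applying it with `kubectl apply -f nginx-deployment.yaml` produces the same workload as the console wizard.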

After creation, the routing entry displays the endpoint through which the service is exposed to the outside world.

[Figure: routing endpoint of the deployed application]

As shown in the figure above, bind ask-demo.com to the routing endpoint 123.57.252.131 in the local hosts file, then access the domain name in a browser to reach the deployed nginx application.
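For reference, this exposure path corresponds roughly to a Service plus an Ingress. The sketch below is an assumption-based illustration: the host ask-demo.com comes from the article, while the Service and Ingress names and ports are made up for the example.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ask-demo-svc             # assumed Service name
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ask-demo-ingress         # assumed Ingress name
spec:
  rules:
  - host: ask-demo.com           # domain bound in the local hosts file
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ask-demo-svc
            port:
              number: 80
```

Locally, the binding described above amounts to a hosts entry such as `123.57.252.131 ask-demo.com`, after which `curl http://ask-demo.com` (or a browser visit) reaches the nginx application.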

Introduction to Common Features #

We usually access the common features of Serverless Kubernetes in two ways: through the Container Service console and through Kubectl.

1. Container Service Console #

[Figure: Container Service console overview]

On the Container Service console, we can perform the following operations through the graphical interface:

  • Basic information: Cluster ID and running status, API Server endpoint, VPC and security, viewing and operating cluster access credentials;
  • Storage volume: Viewing and operating PV, PVC, StorageClass;
  • Namespace: Viewing and operating cluster namespace;
  • Workload: Viewing and operating Deployment, StatefulSet, Job, CronJob, Pod;
  • Service: Viewing and operating the Service provided by the workload;
  • Routing: Viewing and operating Ingress, used for routing Services;
  • Release: Viewing and operating tasks based on Helm or Container Service for phased release;
  • Configuration management: Viewing and operating ConfigMap and Secret;
  • Operation and maintenance management: Event list of the cluster and operation audits.

2. Kubectl #

In addition to using the console, we can also use Kubectl for cluster operations and management.

[Figure: using Kubectl via CloudShell or a local installation]

We can use Kubectl in the cloud through CloudShell, or install Kubectl locally. Then, by writing the cluster’s access credentials into kubeconfig, we can operate the Serverless Kubernetes cluster.
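As a minimal sketch of the local setup (the file path is the conventional default; the credential content itself comes from the cluster’s connection information in the console and is not shown here):

```bash
# Merge the cluster's access credentials from the console into the local kubeconfig
mkdir -p ~/.kube
# paste the KubeConfig content shown in the console into ~/.kube/config

# Verify connectivity and inspect the cluster
kubectl cluster-info
kubectl get ns
kubectl get deploy,svc,ing -n default
```

When CloudShell is opened from the cluster page, the credentials are typically configured automatically, so the same kubectl commands work there without any local setup.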

Application Scaling #

Based on the above explanation, we have already understood how to deploy applications and perform common operations on clusters. Now, let’s learn how to scale applications.

Common application scaling methods in Serverless Kubernetes include:

  • Manual scaling: The most basic method, which sacrifices a certain degree of cost efficiency and application stability;
  • HPA (Horizontal Pod Autoscaler): Elastic scaling based on metrics such as CPU and memory, suitable for applications with bursty traffic (see the manifest sketch after this list);
  • Cron HPA: Scheduled scaling based on Cron expressions, suitable for applications with fixed traffic peaks and valleys;
  • External Metrics (alibaba-cloud-metrics-adapter): Horizontal Pod scaling driven by Alibaba Cloud external metrics, supporting more data metrics than native HPA.
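As an example of the HPA approach mentioned above, the following sketch scales an nginx Deployment between 2 and 10 replicas around a 70% average CPU target. The names and thresholds are assumptions, and the `autoscaling/v2` API version may need to be adjusted on older cluster versions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ask-demo-hpa           # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ask-demo             # the nginx Deployment from the earlier sketch
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Manual scaling, by contrast, is a one-off command such as `kubectl scale deployment ask-demo --replicas=5`.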

Conclusion #

This concludes the sharing on Serverless Kubernetes application deployment and scaling. We hope it helps everyone better understand and use Serverless Kubernetes, and we will share more practical cases of Serverless Kubernetes in the future.