27 Rolling Updates: How to Perform Smooth Application Upgrades and Downgrades #

Hello, I am Chrono.

In our last lesson, we learned how to manage stateful applications using StatefulSets, as well as how to manage stateless applications using Deployments and DaemonSets. With these tools, we can deploy applications of any form in Kubernetes.

However, simply deploying an application to the cluster is not enough. To ensure the stability and reliability of the application, continuous maintenance work is required.

If you remember from [Lesson 18], we learned about the “application scaling” feature of Deployments, which is a common maintenance operation. In Kubernetes, we can easily adjust the number of Pods under a Deployment using the kubectl scale command. And because a StatefulSet manages its replicas in much the same way as a Deployment, it can also be scaled with kubectl scale.
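For example, assuming the ngx-dep Deployment from [Lesson 18] is still running in your cluster, scaling it out and back in is a one-liner each way:

```shell
# Scale out to 5 Pods, then back in to 2.
kubectl scale deploy ngx-dep --replicas=5
kubectl scale deploy ngx-dep --replicas=2
```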

Apart from “application scaling”, how should we handle other maintenance operations such as application updates and version rollbacks? These are also common issues we encounter in our day-to-day operations.

Today, I will use Deployments as an example to explain a more advanced operation in application management in Kubernetes: Rolling Updates. We will use kubectl rollout to achieve seamless application upgrades and downgrades that are transparent to the users.

How Kubernetes Defines Application Versions #

Everyone knows how application versioning works. For example, we release V1 and then a few days later add new features and release V2.

However, while it may sound simple, versioning can be quite challenging in practice. This is because the system is already running online and we must ensure uninterrupted service to external users, which can be compared to “replacing the engine of a plane in midair.” Especially in the past, it required coordination among various departments such as development, testing, operations, monitoring, and networking, which was time-consuming and labor-intensive.

However, application versioning can be systematically approached. With the powerful and automated operations management system Kubernetes, we can abstract the process and let computers handle the complex and tedious manual operations.

In Kubernetes, version updates are not handled by a dedicated API object; instead, they are driven by two commands: kubectl apply and kubectl rollout. Of course, these commands need to be used in conjunction with the YAML files of deployment objects such as Deployment and DaemonSet.

But before we confidently proceed with the operation, we must first understand what exactly is meant by “version” in Kubernetes.

We often simply think of “version” as the “version number” of the application or the “tag” of the container image. However, let’s not forget that in Kubernetes, applications run in the form of Pods, which are typically managed by objects like Deployment. So when we talk about a “version update” in an application, we are actually updating the entire Pod.

So, what determines a Pod?

If you recall the many objects we created earlier, you’ll realize that a Pod is determined by a YAML description file, or more precisely, the template field in objects like Deployment.

Therefore, in Kubernetes, an application’s version change is really a change to the Pod defined in the template. Even a change to a single field in the template produces a new version.

However, the template can be quite lengthy, and it’s not practical to use such a long string as a “version number.” Therefore, Kubernetes uses a “digest” function to calculate a hash value of the template and uses it as the “version number.” Although it’s not human-friendly to read, it is very practical.
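Kubernetes itself uses an FNV hash of the Pod template (an implementation detail), but the core idea can be sketched in a few lines of Python: serialize the template deterministically, digest it, and any field change yields a new “version number.” The SHA-256 digest and the template dictionaries here are illustrative, not the real algorithm or objects:

```python
import hashlib
import json

def template_hash(template: dict) -> str:
    # Serialize the template deterministically, then digest it.
    # (Kubernetes actually uses an FNV hash of the PodTemplateSpec;
    # SHA-256 here just illustrates the idea.)
    data = json.dumps(template, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()[:10]

v1 = {"containers": [{"name": "nginx", "image": "nginx:1.21-alpine"}]}
v2 = {"containers": [{"name": "nginx", "image": "nginx:1.22-alpine"}]}

print(template_hash(v1))
print(template_hash(v2))
```

Changing even one field (here, the image tag) produces a different digest, so v1 and v2 print different “version numbers,” while hashing the same template twice always gives the same result.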

Let’s take the Nginx Deployment from [Lesson 18] as an example. After creating the object, you can use kubectl get to check the status of the Pod:

Image

The random string in the Pod name, “6796…”, is the Hash value of the Pod template, which is also the “version number” of the Pod.

If you make changes to the Pod YAML description, such as changing the image to nginx:stable-alpine or changing the container name to nginx-test, a new application version will be generated, and when you use kubectl apply, the Pod will be recreated:

Image

As you can see, the Hash value in the Pod name has changed to “7c6c…”, indicating that the Pod has been updated to a new version.

How Kubernetes Implements Application Updates #

To study the application update process in Kubernetes in more detail, let’s make some modifications to the Nginx Deployment object and see how Kubernetes achieves version updates.

First, modify the ConfigMap to output the version number of Nginx, making it easier for us to check the version using curl:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200
          'ver : $nginx_version\nsrv : $server_addr:$server_port\nhost: $hostname\n';
      }
    }    

Then, modify the Pod image to explicitly specify the version as 1.21-alpine and set the number of instances to 4:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 4
  ... ...
    containers:
    - image: nginx:1.21-alpine
  ... ...

Save it as ngx-v1.yml, and then execute the command kubectl apply to deploy this application:

kubectl apply -f ngx-v1.yml

We can also create a Service object for it and use kubectl port-forward to forward requests and check the status:
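If you don’t still have the Service from the earlier lessons, a minimal sketch might look like this (the app: ngx-dep selector is an assumption about how the Deployment labels its Pods; adjust it to match your own template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    targetPort: 80
```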

kubectl port-forward svc/ngx-svc 8080:80 &
curl 127.1:8080

Image

From the output of the curl command, we can see that the current version of the application is 1.21.6.

Now, let’s create a new version object ngx-v2.yml and upgrade the image to nginx:1.22-alpine, keeping everything else the same.

Because Kubernetes performs updates very quickly, in order to observe the update process we need to add the field minReadySeconds. It makes Kubernetes wait for a period of time after each new Pod becomes ready before considering it available and continuing to create the remaining Pods.

It should be noted that the minReadySeconds field does not belong to the Pod template, so it does not affect the Pod version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  minReadySeconds: 15      # The waiting time to confirm the Pod is ready
  replicas: 4
  ... ...
    containers:
    - image: nginx:1.22-alpine
  ... ...

Now let’s execute the command kubectl apply to update the application. Because the image name changed, the Pod template changed with it, which triggers a “version update”. Then, use a new command, kubectl rollout status, to check the progress of the update:

kubectl apply -f ngx-v2.yml
kubectl rollout status deployment ngx-dep

Image

After the update is complete, if you execute kubectl get pod, you will see that all Pods have been replaced with the new version “d575…”. Accessing Nginx with curl will also show the output as “1.22.0”:

Image

If you carefully examine the output of kubectl rollout status, you will find that Kubernetes does not destroy all old Pods and create new Pods all at once. Instead, it creates new Pods one by one while also destroying old Pods, ensuring that there are always enough Pods running in the system without any “downtime” interrupting the service.

The process of increasing the number of new Pods is a bit like “rolling a snowball”, starting from zero and gradually getting bigger. This is known as a “rolling update”.
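To watch the replacement happen Pod by Pod, you can keep a watch running in a second terminal while kubectl apply does its work (the app=ngx-dep label is again an assumption about the Pod template’s labels):

```shell
# Stream Pod changes as old Pods terminate and new ones start.
kubectl get pod -l app=ngx-dep --watch
```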

You can use the kubectl describe command to see the changes in the Pods in more detail:

kubectl describe deploy ngx-dep

Image

  • At the beginning, there were 4 V1 Pods (i.e., ngx-dep-54b865d75).
  • When the “rolling update” started, Kubernetes created 1 V2 Pod (i.e., ngx-dep-d575d5776) and reduced the number of V1 Pods to 3.
  • Then, the number of V2 Pods was increased to 2, while the number of V1 Pods became 1.
  • Finally, the number of V2 Pods reached the desired value of 4, and the number of V1 Pods became 0, completing the entire update process.

By now, you might have a clearer picture. In fact, a “rolling update” is really two synchronized “application scaling” operations controlled by the Deployment: the old version is scaled down to 0 while the new version is scaled up to the specified value, one Pod out for one Pod in.

I have drawn a diagram of this rolling update process, which you can refer to for a better understanding:

Image

How Kubernetes Manages Application Updates #

The “rolling update” feature in Kubernetes is indeed very convenient. It allows you to upgrade your application to a new version without any manual intervention and without interrupting the service. However, what should you do if an error occurs during the update process or if you discover bugs after the update?

To address these two issues, we can still use the kubectl rollout command.

During the application update process, you can use the kubectl rollout pause command at any time to pause the update. This lets you inspect the state, modify the Pods, or test and validate the new version. If everything looks good, you can then use the kubectl rollout resume command to continue the update.

These two commands are quite simple, so I won’t go into detail. It’s worth noting that they only support Deployments; DaemonSets and StatefulSets cannot be paused this way (StatefulSets do support rolling updates, just not kubectl rollout pause).
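In practice the pair looks like this, assuming an update of ngx-dep is in flight:

```shell
# Freeze the rollout mid-flight for inspection or validation...
kubectl rollout pause deploy ngx-dep

# ...and continue where it left off once everything checks out.
kubectl rollout resume deploy ngx-dep
```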

For issues that arise after an update, Kubernetes provides a “safety net” in the form of update history. You can view the update history, which includes every previous update, and roll back to any point, similar to the version control software we commonly use in development, such as Git.

The command to view the update history is kubectl rollout history:

kubectl rollout history deploy ngx-dep

Image

It will output a list of versions. Since creating the Nginx Deployment counts as one version and updating it counts as another version, there will be two historical records here.

However, the information provided by kubectl rollout history is quite limited. You can add the --revision parameter to the command to view detailed information about each version, including labels, image names, environment variables, volumes, and so on. This can give you a rough understanding of which key fields have changed in each update:

kubectl rollout history deploy ngx-dep --revision=2

Image

Suppose we decide that the recently updated nginx:1.22-alpine is not what we want and we’d like to go back to the previous version. We can use the kubectl rollout undo command; by default it rolls back to the previous revision, and with the --to-revision parameter it can roll back to any revision in the history:

kubectl rollout undo deploy ngx-dep

Image

The process of kubectl rollout undo is actually the same as kubectl apply: it performs a “rolling update” using the old version of the Pod template, scaling the new Pods down to 0 while scaling the old Pods back up to the specified value.
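To target a specific revision rather than just the previous one, pick a number from the history list first (the revision number 1 here is illustrative; use whatever your own history shows):

```shell
# List revisions, then roll back to one of them explicitly.
kubectl rollout history deploy ngx-dep
kubectl rollout undo deploy ngx-dep --to-revision=1
```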

I’ve also created a diagram that shows the process of “version downgrade” from V2 to V1. It is exactly the same as the “version upgrade” process from V1 to V2, with the only difference being the change in version numbers:

Image

How to Add Update Descriptions in Kubernetes #

So far, you have learned about the update functionality in Kubernetes.

However, have you ever felt that the version list displayed by kubectl rollout history is too simple? It only shows a version update number, and the CHANGE-CAUSE column always displays <none>. Can we add descriptive information for each update, just like in Git?

Image

Of course, this is possible, and the solution is quite simple. We just need to add a new field called annotations in the metadata section of the Deployment.

The annotations field is used to add additional information to the Kubernetes objects. It is similar in structure to labels, as both are in the form of key-value pairs. However, they have different purposes:

  • Information added to annotations is generally used by various internal Kubernetes components, acting like “extension properties”;
  • labels are mainly used by external users of Kubernetes to filter and select objects.

To explain it using a simple analogy, annotations is like the product manual inside the packaging box, while labels are like the labels/stickers on the outside of the box.

With the help of annotations, Kubernetes can add arbitrary additional information to API objects without modifying their structure or adding new fields. This follows the Open-Closed Principle (OCP) in object-oriented design, making objects more extensible and flexible.

You can write any value in annotations, and Kubernetes will automatically ignore any key-value pairs it doesn’t understand. However, to write update descriptions, you need to use a specific field called kubernetes.io/change-cause.

Let’s perform an operation to create three versions of the Nginx application while adding update descriptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: v1, ngx=1.21
... ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: update to v2, ngx=1.22
... ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: update to v3, change name
... ...

You need to pay attention to the metadata section in the YAML. It uses annotations.kubernetes.io/change-cause to describe the version update. This is much easier to understand compared to the extensive information displayed by kubectl rollout history --revision.
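Besides editing the YAML, you can also set or change the description imperatively with the standard kubectl annotate subcommand (--overwrite replaces any existing value); the description text here is just an example:

```shell
kubectl annotate deploy ngx-dep \
  kubernetes.io/change-cause="update to v2, ngx=1.22" --overwrite
```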

After creating and updating the objects using kubectl apply, let’s check the update history using kubectl rollout history:

Image

This time, the displayed list is much more appealing. The main changes for each version are clearly listed, giving a similar feeling to Git version control.

Summary #

Alright, today we learned about the advanced application management features in Kubernetes: Rolling Updates. It automatically scales the number of new and old Pods and achieves service upgrades or downgrades without user awareness, making complex and tricky operations simple and easy.

Let’s summarize today’s key points:

  1. In Kubernetes, the application version is not just the container image but the entire Pod template. To facilitate processing, a hash value is calculated using a digest algorithm as the version number.
  2. Kubernetes uses the rolling update strategy to update applications. It reduces the number of old Pods while increasing the number of new Pods to ensure the service is always available during the update process.
  3. The kubectl rollout command is used to manage application updates. Subcommands include status, history, undo, and others.
  4. Kubernetes keeps a record of application update history. You can use history --revision to view detailed information about each version, and you can also add the annotation kubernetes.io/change-cause for each update.

In addition, there are other fields in the Deployment that give you finer control over the rolling update process. They live under spec.strategy.rollingUpdate: maxSurge controls how many extra Pods may be created above the desired count during an update, and maxUnavailable controls how many Pods may be unavailable at any moment. The default values are usually sufficient, but if you’re interested, you can explore them further in the Kubernetes documentation.
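A sketch of where these fields sit in a Deployment; the values here are illustrative (this particular combination trades a slightly larger footprint during the update for never dropping below the desired capacity), not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 Pod above the desired count during the update
      maxUnavailable: 0  # never let ready Pods drop below the desired count
  ... ...
```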

Homework #

Finally, it’s time for the homework. I have two questions for you to think about:

  1. What are the similarities and differences between Kubernetes’ “rolling update” and the commonly mentioned “gray release” (also known as a canary release)?
  2. Deploying an old version of YAML directly can also achieve version rollback. What are the advantages of the kubectl rollout undo command?

Feel free to participate in the discussion in the comments section. If you find today’s content helpful, please feel free to share it with your friends for further discussion. See you in the next class.
