
14 Application Gateway: OpenResty Integration with Kubernetes (K8s) in Practice #

There are many choices for a cloud-native application gateway, such as Nginx/OpenResty, Traefik, and Envoy. In terms of deployment footprint, Nginx/OpenResty is undoubtedly the most widely used reverse-proxy gateway. This article explores how Kubernetes uses the Ingress object as the unified external entry gateway, and how OpenResty can be leveraged to extend the capabilities of that gateway.

Why OpenResty is needed #

The native Kubernetes Service exposes a group of business Pods behind a single ClusterIP and provides software load balancing for the traffic sent to them. The drawback is that a Service's ClusterIP is reachable only from within the cluster. To make a Service accessible to users outside the cluster, Kubernetes offers two additional mechanisms: NodePort and LoadBalancer.

[Figure: NodePort]

When a containerized application exposes its Service with the NodePort method so that external users can reach it, the following difficulties arise (a minimal Service sketch follows the list):

  • External users must include the node port number when accessing the service
  • A randomly assigned node port can change every time the Service is redeployed (recreated)
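
For reference, a minimal NodePort Service might look like the following sketch (the name, labels, and ports are illustrative, not taken from the text above):

apiVersion: v1
kind: Service
metadata:
  name: demo-svc              # illustrative name
spec:
  type: NodePort
  selector:
    app: demo                 # selects the workload Pods
  ports:
    - port: 80                # ClusterIP port, reachable only inside the cluster
      targetPort: 8080        # container port
      nodePort: 30080         # if omitted, Kubernetes assigns a random port from 30000-32767

External users then have to call http://&lt;node-ip&gt;:30080, which shows both problems: callers must know the port, and a randomly assigned port may change when the Service is recreated.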

[Figure: LoadBalancer]

When a containerized application uses the LoadBalancer method, the main scenario is integration with a cloud provider's load balancer; cloud providers supply the corresponding load-balancing plugins to make integration with Kubernetes straightforward.
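
As a rough sketch (again with illustrative names), the only difference on the Kubernetes side is the Service type; the provider's plugin then provisions the external load balancer and reports its address back into the Service status:

apiVersion: v1
kind: Service
metadata:
  name: demo-svc-lb           # illustrative name
spec:
  type: LoadBalancer          # the cloud provider's controller provisions an external LB for this Service
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080

kubectl get svc demo-svc-lb shows the provisioned address in the EXTERNAL-IP column once the provider has finished.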

For most scenarios, however, we still need a self-hosted entry application gateway to expose services externally. External traffic then converges on the gateway's Layer 7 web ports, and the existence of NodePort is hidden from users, so they no longer need to worry about node-port conflicts when applications are deployed frequently.
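
For context, a minimal Ingress object routed through such a gateway could look like the following sketch (host, Service name, and port are illustrative; older clusters use the networking.k8s.io/v1beta1 or extensions/v1beta1 API instead of v1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                      # illustrative name
  annotations:
    kubernetes.io/ingress.class: "nginx"  # claimed by the Nginx/OpenResty ingress controller
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc            # the Service from the earlier sketch
                port:
                  number: 80

Callers only ever see ports 80/443 on the gateway; any NodePort behind it becomes an implementation detail.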

The Ingress controller originally introduced by Kubernetes uses Nginx as its engine, and it has some prominent issues in practice:

Reload issue #

Ingress in Kubernetes was designed so that the YAML configuration is handed to the Ingress Controller, converted into nginx.conf, and a reload of nginx.conf is then triggered to make the configuration take effect. In day-to-day operations, even occasional edits to the Ingress YAML trigger a reload every time the configuration takes effect. This is hard to accept; when incoming traffic uses long-lived connections, reloads are especially likely to cause incidents.

Weak extension capabilities #

Although Ingress was originally designed to solve the entry-gateway problem, the business requirements on the entry gateway are no lighter than those on internal gateways. Business-level gray (canary) releases, circuit breaking, rate limiting, authentication, and similar capabilities are increasingly expected to be implemented on Ingress, yet the extension points provided by the native Ingress are limited.

[Figure: Ingress]

To solve the inherent problems of Nginx described above, OpenResty, which extends Nginx with Lua, is obviously the best replacement. The community has already replaced the Nginx core of the Ingress controller with OpenResty. (Note: https://github.com/kubernetes/ingress-nginx/pull/4220)

Rediscovering the NGINX Ingress Controller #

[Figure: NGINX Ingress Controller architecture]

In general, a Kubernetes controller uses a synchronization loop to check whether the desired state held by the controller has been updated or needs to change. To do this, we build a model from different objects in the cluster, in particular Ingresses, Services, Endpoints, Secrets, and ConfigMaps, and generate a configuration file that reflects the cluster's current state.

To obtain these objects from the cluster, we use Kubernetes informers, in particular FilteredSharedInformer. Informers let us react to individual changes through callbacks when objects are added, modified, or deleted. Unfortunately, we cannot know whether a particular change will affect the final configuration file. Therefore, on every change we have to rebuild a new model from the cluster state and compare it with the current model. If the new model equals the current one, we skip generating a new Nginx configuration and triggering a reload. Otherwise, we check whether the difference concerns only Endpoints. If so, we send the new Endpoints list with an HTTP POST request to the Lua handler running inside Nginx, and again avoid generating a new Nginx configuration and triggering a reload. If the differences between the running model and the new model go beyond Endpoints, we create a new Nginx configuration from the new model, replace the current model, and trigger a reload.

To avoid unnecessary process reloads, we still need to be clear about which cases do trigger a reload:

  • A new Ingress resource is created
  • A TLS section is added to an existing Ingress
  • A change to Ingress annotations that affects more than just the upstream configuration (for example, load-balancing annotations alone do not require a reload; see the sketch after this list)
  • A path is added to or removed from an Ingress
  • An Ingress, Service, or Secret is deleted
  • A previously missing object referenced by an Ingress, such as a Service or Secret, becomes available
  • A referenced Secret (certificate/key) configuration is updated
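
To illustrate the load-balancing exception noted above, a change that only touches an annotation handled by the Lua balancer, such as nginx.ingress.kubernetes.io/load-balance, is applied through the dynamic configuration path and should not regenerate nginx.conf (fragment sketch, names illustrative):

# Fragment of an Ingress object: only the annotation value changes
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "ewma"   # e.g. switching from round_robin; no reload expected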

In addition, because Lua is used, we also need to understand how to add Lua plugins to the Nginx Ingress Controller. Let's walk through adding and activating a plugin with an example:

Refer to https://github.com/ElvinEfendi/ingress-nginx-openidc and add the Openidc Lua plugin.

  • Add the Lua plugin to rootfs/etc/nginx/lua/plugins/openidc/main.lua
  • Build your own Ingress image: docker build -t ingress-nginx-openidc rootfs/

Sample Dockerfile:

FROM quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1

# Switch to root so LuaRocks can install the dependency
USER root

RUN luarocks install lua-resty-openidc

# Drop back to the unprivileged user the controller normally runs as
USER www-data

# Ship the customized Nginx template and the openidc plugin sources
COPY etc/nginx/template/nginx.tmpl /etc/nginx/template
COPY etc/nginx/lua/plugins/openidc /etc/nginx/lua/plugins/openidc

  • Update the Nginx configuration template /etc/nginx/template/nginx.tmpl to activate the Lua plugin by adding plugins.init({ "openidc" })
  • Deploy your custom Ingress image to the cluster to provide the corresponding plugin capabilities (a deployment fragment sketch follows).
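
As a hedged sketch of that last step, assuming the controller runs as a Deployment whose container is named nginx-ingress-controller (the actual names, namespace, and registry depend on how it was installed), pointing it at the custom image is just an image change:

# Fragment of the ingress controller Deployment; names and registry are assumptions
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          image: your-registry/ingress-nginx-openidc:0.26.1   # the custom image built above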

Zero-Downtime Production Deployment #

The official Nginx Ingress Controller is deployed as containers on the nodes. When its configuration needs to be upgraded, the Ingress Pods still have to restart for the update to apply, causing a momentary interruption of network traffic. In a production environment this is not acceptable; we need the Nginx Ingress Controller to keep running and handling traffic continuously.

When a Pod is terminated, Kubernetes sends a SIGTERM signal to the container's main process. By default it then waits 30 seconds before sending SIGKILL to terminate the container process immediately. Kubernetes expects the container's main process to handle SIGTERM gracefully and shut down cleanly, but not every process does; Nginx is one example.

Nginx has different supported signals:

     Nginx Signals
    +-----------+--------------------+
    |   signal  |      response      |
    +-----------+--------------------+
    | TERM, INT | fast shutdown      |
    | QUIT      | graceful shutdown  |
    +-----------+--------------------+

So, if we do not do any preprocessing, when Kubernetes sends SIGTERM, Nginx performs a fast shutdown and terminates immediately. If Nginx is handling traffic at that moment, users will see brief HTTP 503 errors. To shut the Nginx process down gracefully, we need to send it a SIGQUIT signal before Kubernetes terminates it. The solution is to use the Pod's preStop hook to send SIGQUIT in advance. The following script does that:

# Ask the Nginx master process to shut down gracefully (equivalent to SIGQUIT)
/usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf -s quit
# Wait until all nginx processes have exited
while pgrep -x nginx; do
  sleep 1
done

We can put the above script into a single-line command and add it to the lifecycle section of the Pod specification like this:

lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 5; /usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf -s quit; while pgrep -x nginx; do sleep 1; done"]

Please note the sleep 5 command before the actual shutdown script. It gives Kubernetes-side processing (such as taking the Pod out of the Service's Endpoints) time to complete before the graceful shutdown starts. In testing, if this sleep is skipped, Nginx still drops connections.

Additionally, if the graceful shutdown may need more time than the default termination grace period of 30 seconds, you can extend it with the following configuration:

spec:
  terminationGracePeriodSeconds: 600

With this, graceful shutdown of the Nginx process is taken care of. However, this configuration alone still cannot guarantee uninterrupted traffic when the controller itself is upgraded. To achieve continuous deployment, we generally run two redundant Ingress deployments so that restarting one never affects the business. The steps are as follows:

helm upgrade --install nginx-ingress stable/nginx-ingress --namespace ingress -f nginx/values.yaml

helm upgrade --install nginx-ingress-temp stable/nginx-ingress --namespace ingress-temp -f nginx/values.yaml

Switch DNS to route traffic to the temporary controller, then watch the traffic move over in the controller logs:

kubectl logs -lcomponent=controller -ningress -f

kubectl logs -lcomponent=controller -ningress-temp -f

Update the old Ingress Controller by adding the following configuration to its values.yaml:

controller:
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "sleep 5; /usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf -s quit; while pgrep -x nginx; do sleep 1; done"]
  terminationGracePeriodSeconds: 600

Roll out the updated Nginx Ingress Controller:

helm upgrade --install nginx-ingress stable/nginx-ingress --namespace ingress --version 1.6.16 -f nginx/values.yaml

Switch DNS back to route traffic to the original Ingress, then clean up the temporary Ingress Controller:

helm delete --purge nginx-ingress-temp

kubectl delete namespace ingress-temp

Creating Custom Annotations for the Kubernetes ingress-nginx Controller #

Compared with plain Nginx configuration, the cloud-native Ingress Controller uses a large number of annotations to define reusable configuration options. Understanding how these are implemented, and being able to apply them flexibly, is a great help for operating business traffic.
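
For example, per-Ingress behavior is tuned with annotations on the Ingress object rather than by editing nginx.conf directly; a small sketch using two of the controller's well-known annotations (the Ingress name is illustrative):

metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"   # per-Ingress request body size limit
    nginx.ingress.kubernetes.io/ssl-redirect: "true"     # force redirect from HTTP to HTTPS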

The process of adding a custom Annotation is as follows:

  • Clone the official ingress-nginx repo and create a directory for the custom annotation under internal/ingress/annotations. Add a main.go there and implement the annotation's business logic.
  • Register the new annotation variable in the internal/ingress/annotations/annotations.go file.
  • Declare the annotation's structure in types.go. Then, in controller.go, make sure the service object is populated with the values parsed from the annotation; this file contains the logic that handles an Ingress object and transforms it into something that can be rendered into the Nginx configuration.
  • Expose the annotation structure's fields as template variables in nginx.tmpl so they are rendered into the final Nginx configuration (a usage sketch follows this list).
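
Once the custom annotation is compiled into your controller build, it is consumed from an Ingress like any built-in one. A hypothetical usage sketch (the annotation name custom-response-header is purely an illustration, not something the upstream controller provides):

metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/custom-response-header: "X-Env: staging"   # hypothetical custom annotation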

You can refer to this complete example for more information:

https://github.com/diazjf/ingress/commit/2e6ac94dd79e80e1b2d788115647f44a526afcfd

Summary of Experience #

The Ingress object is the standard way for Kubernetes to bring traffic into the cluster, and it is advisable to use Ingress objects to partition traffic groups within the enterprise. Because the Ingress controller natively watches the API Server, it can expose service capabilities to the outside world dynamically; there is no need to build or customize a separate gateway access solution. Using the Ingress gateway directly meets most requirements, and where it falls short, the Lua plugin mechanism provided by OpenResty fills the gap well. Besides the official Nginx Ingress controller, open-source vendors in China also provide OpenResty-based gateways with more built-in plugins, such as Apache APISIX Ingress (https://github.com/api7/ingress-controller). With the introduction above, you should be able to put them to use quickly.
