
Bonus Discussion on Kong Ingress Controller #

Hello, I am Chrono.

It has been over three months since the course ended. Do you remember what I said at the end of the course? “It is not just an end, but also a new beginning.” The completion of the course does not mean that we will stop exploring Kubernetes. On the contrary, whether it’s you or me, we will continue on this learning journey.

When the course started, I had planned a lot of content. However, the field of Kubernetes is vast, and coupled with my busy daily work, I have limited time and energy. As a result, some of the planned knowledge points have not been covered, which is a pity. I have always wanted to find an opportunity to make up for that.

Lately my development work has eased up a bit, leaving me more free time, so I returned to the column and decided to talk about another popular tool, Kong Ingress Controller, and along the way revisit why Ingress matters for managing a Kubernetes cluster.

Introducing Kong Ingress Controller #

Let’s quickly review our knowledge about Ingress ([Lesson 21]).

Like Service, Ingress routes traffic, but it operates at layer 7 on top of HTTP/HTTPS: it is a collection of layer-7 load-balancing rules. However, Ingress itself has no management capability; it must rely on an Ingress Controller to actually control the traffic entering and leaving the Kubernetes cluster.

Therefore, based on the definition of Ingress, various implementations of Ingress Controller have emerged.

We have already seen the Nginx Ingress Controller developed by the official Nginx team. However, it is limited by the capabilities of Nginx itself: whenever Ingress, Service, or other objects are updated, the static configuration file must be regenerated and the process reloaded. This can cause problems in a microservices system where changes are frequent.

Today, we will talk about the Kong Ingress Controller. It stands on the shoulders of the giant Nginx and is based on OpenResty and an embedded LuaJIT environment. It achieves fully dynamic routing changes, eliminating the cost of reloads and making the operation more stable. Moreover, it has many additional enhancement features, making it very suitable for users who have higher and more detailed management requirements for the traffic of the Kubernetes cluster (Image source from Kong’s official website).

Image


Installing Kong Ingress Controller #

Next, let’s take a look at how to introduce the Kong Ingress Controller into a Kubernetes cluster.

For simplicity, this time I chose the minikube environment, version 1.25.2, which runs the same Kubernetes version 1.23.3 we used earlier in the course:

Image

Currently, the latest version of Kong Ingress Controller is 2.7.0. You can directly obtain its source code package from GitHub (https://github.com/Kong/kubernetes-ingress-controller):

wget https://github.com/Kong/kubernetes-ingress-controller/archive/refs/tags/v2.7.0.tar.gz
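If you want to follow along, unpacking the archive looks like this (a sketch; the top-level directory name comes from the 2.7.0 tag):

tar -xzf v2.7.0.tar.gz
cd kubernetes-ingress-controller-2.7.0
ls deploy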

The YAML files required for installing Kong Ingress Controller are stored in the “deploy” directory after decompression. It provides two deployment methods: “with-database” and “database-less”. Here, I chose the simplest “database-less” method, which only requires one all-in-one-dbless.yaml to complete the deployment. Execute the following kubectl apply command:

kubectl apply -f all-in-one-dbless.yaml

Image

We can also compare the installation methods of the two Ingress Controllers. The Nginx Ingress Controller is spread across multiple YAML files that must be applied in order with several kubectl apply commands, which is a bit cumbersome. Kong Ingress Controller, on the other hand, consolidates objects such as Namespace, RBAC, Secret, and CRD into a single file, so installation is very convenient and there is no risk of forgetting to create a resource.

After installation, Kong Ingress Controller creates a new namespace, "kong", with a default Ingress Controller Pod and its corresponding Services. You can view them with kubectl get:

kubectl get ns,pod,svc -n kong

Image

Looking at this screenshot, you may notice that the “READY” column displayed in kubectl get pod shows “2/2”, indicating that this Pod contains two containers.

This is also a clear difference in architectural implementation between Kong Ingress Controller and Nginx Ingress Controller.

Kong Ingress Controller runs two containers in one Pod: one for the management process (the Controller) and one for the proxy process (the Proxy); the two communicate over the loopback address. Nginx Ingress Controller, by contrast, must keep the management process and the proxy process in the same container, because the controller needs to rewrite Nginx's static configuration file directly (the image below shows Kong's architecture design).

Image

Both methods have their own advantages, but the separation of Kong Ingress Controller has the benefit that the two containers are independent of each other, allowing for separate upgrades and maintenance, which is more friendly to operations.
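You can verify this two-container layout yourself by listing the container names in the Pod (a minimal check; in my 2.7.0 manifest the containers are named "proxy" and "ingress-controller", but confirm in your own environment):

kubectl get pod -n kong -o jsonpath='{.items[0].spec.containers[*].name}'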

Kong Ingress Controller also creates two Service objects. The "kong-proxy" Service, which routes the traffic, is defined with type LoadBalancer, meaning it is intended to expose the service externally in a production environment. In our experimental environment (whether minikube or kubeadm), however, we can only use it in NodePort form. Here you can see that port 80 is mapped to port 32201 on the node.
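The 32201 above is just what my environment happened to assign; you can check the mapping on your own cluster like this:

kubectl get svc kong-proxy -n kong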

Now, let’s try accessing Kong Ingress Controller using the IP address of the worker node. If you are using minikube like me, you can simply obtain it using $(minikube ip):

curl $(minikube ip):32201 -i

Image

From the response obtained from curl, we can see that Kong Ingress Controller 2.7 internally uses Kong version 3.0.1. Since we have not configured any Ingress resources for it, it returns a status code of 404, which is normal.

We can also use kubectl exec command to enter the Pod and view its internal information:

kubectl exec -it -n kong <pod-name> -- sh

Image

Although the Kong Ingress Controller Pod has two containers, we do not need to specify one with the -c option: kubectl exec enters the default Proxy container automatically (the Controller container cannot be entered this way because its image does not include a shell).
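If you don't want to copy the Pod name by hand, you can look it up with a label selector first (a sketch; I'm assuming the app=ingress-kong label from the 2.7.0 manifest, which you can confirm with kubectl get pod --show-labels -n kong):

POD_NAME=$(kubectl get pod -n kong -l app=ingress-kong -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n kong $POD_NAME -- sh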

Using Kong Ingress Controller #

Now that we have installed Kong Ingress Controller, let’s see how to use it. Similar to Lesson 21, we will not use the default Ingress Controller. Instead, we will create a new instance using Ingress Class to better understand and grasp the usage of Kong Ingress Controller.

First, we define the backend application, still using Nginx for simulation. The process is the same as in Lesson 20: we put the configuration file in a ConfigMap and load it into the Nginx Pod, then deploy the Deployment and Service. Since you are already familiar with this, I won't walk through the YAML line by line; a minimal sketch follows for reference, and after that we'll look at the screenshot of running the commands.
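Here is that sketch. The names (ngx-conf, ngx-dep, ngx-svc) match the screenshots, but treat the details as an approximation of the Lesson 20 objects rather than the exact YAML:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200 'srv : $server_addr:$server_port\nhost: $hostname\nuri : $request_method $host $request_uri\ndate: $time_iso8601\n';
      }
    }

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 2                      # two Nginx Pods, as in the screenshot
  selector:
    matchLabels:
      app: ngx-dep
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      volumes:
      - name: ngx-conf-vol         # load the ConfigMap as a volume
        configMap:
          name: ngx-conf
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ngx-conf-vol
          mountPath: /etc/nginx/conf.d

---

apiVersion: v1
kind: Service
metadata:
  name: ngx-svc                    # the Service name referenced by the Ingress
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80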

Screenshot

You can see that I created two Nginx Pods, and the name of the Service object is ngx-svc.

Next, we define the Ingress Class. The name is “kong-ink”, and the value of the “spec.controller” field is the name of Kong Ingress Controller, “ingress-controllers.konghq.com/kong”. You can refer to Lesson 21 for the YAML format:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: kong-ink

spec:
  controller: ingress-controllers.konghq.com/kong

Then, we define the Ingress object. We can use kubectl create to generate a YAML template file, using --rule to specify the routing rule and --class to specify the Ingress Class (as in earlier lessons, $out is shorthand for --dry-run=client -o yaml):

kubectl create ing kong-ing --rule="kong.test/=ngx-svc:80" --class=kong-ink $out

The generated Ingress object will look something like this. The domain name is “kong.test”, and the traffic will be forwarded to the backend ngx-svc service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing

spec:
  ingressClassName: kong-ink

  rules:
  - host: kong.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ngx-svc
            port:
              number: 80

Finally, we need to separate the definition of the Ingress Controller from the all-in-one-dbless.yaml file. It’s actually quite simple. Just search for “Deployment” and make a copy of it, along with the related Service code, and save it as “kic.yml”.

Of course, the copied code is exactly the same as the default Kong Ingress Controller, so we must make some modifications based on the documentation. Here are the key points:

  • Rename the metadata.name in Deployment and Service. For example, change them to ingress-kong-dep and ingress-kong-svc.
  • Modify spec.selector and template.metadata.labels to match the new names. Generally, they should be the same as the Deployment name, i.e., ingress-kong-dep.
  • The first container is the traffic proxy. Its image can be changed to any supported version, such as kong:2.7, kong:2.8, or kong:3.1.
  • The second container is the rule management controller. Use the environment variable “CONTROLLER_INGRESS_CLASS” to specify the new Ingress Class name kong-ink and use “CONTROLLER_PUBLISH_SERVICE” to specify the Service name ingress-kong-svc.
  • Modify the Service object to use NodePort as the type for easier testing.

After making these changes, a new Kong Ingress Controller is ready. The modifications are as follows, and I have added comments to indicate the changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-kong-dep              # Rename
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-kong-dep            # Rename
  template:
    metadata:
      labels:
        app: ingress-kong-dep            # Rename
    spec:
      containers:
      - env:                             # First container, Proxy
        ...
        image: kong:3.1                   # Change image

      - env:                             # Second container, Controller
        - name: CONTROLLER_INGRESS_CLASS
          value: kong-ink                       # Change Ingress Class
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/ingress-kong-svc          # Change Service
        ...
---

apiVersion: v1
kind: Service
metadata:
  name: ingress-kong-svc              # Rename
  namespace: kong
spec:
  ...
  selector:
    app: ingress-kong-dep             # Rename
  type: NodePort                      # Change type

In our accompanying GitHub project for this column, you can also find the modified YAML files directly. Once you have prepared all of this, we can test and verify the Kong Ingress Controller:

kubectl apply -f ngx-deploy.yml
kubectl apply -f ingress.yml
kubectl apply -f kic.yml

Screenshot

The screenshot shows the results of creating these objects. The NodePort port of the new Service object is 32521.
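Again, 32521 is specific to my environment; you can query your own with:

kubectl get svc ingress-kong-svc -n kong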

Next, we will use curl to send an HTTP request. Note that you should use the "--resolve" or "-H" option to specify the domain name "kong.test" defined in the Ingress, otherwise Kong Ingress Controller will not find the route:

curl $(minikube ip):32521 -H 'host: kong.test' -v

Screenshot

As you can see, Kong Ingress Controller correctly applied the Ingress routing rules and returned the response data from the backend Nginx application. Additionally, you can see from the “Via” response header that it is now using Kong 3.1.
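If you prefer the "--resolve" form mentioned above, an equivalent command looks like this (a sketch using the same NodePort; curl maps kong.test to the node IP itself):

curl --resolve kong.test:32521:$(minikube ip) http://kong.test:32521 -i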

Extending Kong Ingress Controller #

By now, you should have mastered the basic usage of Kong Ingress Controller.

However, using the standard Kubernetes Ingress resource alone to manage traffic is not enough to unleash the true power of Kong Ingress Controller. It has many useful and practical enhancement features.

In [Lesson 27], we mentioned annotations, which are a convenient way for Kubernetes to provide extension capabilities for resource objects. So, with annotations, we can allow Kong Ingress Controller to better utilize the internal Kong to manage traffic without modifying the definition of Ingress itself.

Currently, Kong Ingress Controller supports adding annotations to Ingress and Service objects. You can refer to the relevant documentation on the official website (https://docs.konghq.com/kubernetes-ingress-controller/2.7.x/references/annotations/) for more details. Here, I will only introduce two annotations.

The first one is “konghq.com/host-aliases”, which allows you to add additional domain names to Ingress rules.

You may know that wildcards * can be used in Ingress domain names, such as *.abc.com. However, the problem is that * can only be used as a prefix and not as a suffix. In other words, we cannot have a domain name like abc.*, which can be a bit inconvenient when managing multiple domain names.

With “konghq.com/host-aliases”, we can “bypass” this limitation and easily match Ingress with domain names that have different suffixes.

For example, let’s modify the Ingress definition and add an annotation in the “metadata” section to make it support not only “kong.test” but also “kong.dev” and “kong.ops” domain names, like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing
  annotations:
    konghq.com/host-aliases: "kong.dev, kong.ops"  # note this annotation
spec:
  ...

Use kubectl apply to update the Ingress, and then use curl to test it:
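For example, requests using the alias domain names should now be routed just like "kong.test":

curl $(minikube ip):32521 -H 'host: kong.dev' -i
curl $(minikube ip):32521 -H 'host: kong.ops' -i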

Image

You will find that the Ingress now supports these new domain names.

The second one is “konghq.com/plugins”, which allows you to enable various plugins included in the Kong Ingress Controller.

Plugins are a distinctive feature of Kong Ingress Controller. You can think of them as “pre-made artifacts” that can be attached during the process of traffic forwarding, implementing various data processing operations. The plugin mechanism is open, which means we can use both official and third-party plugins, as well as write plugins in languages like Lua or Go to meet specific needs.

Kong maintains a certified plugin hub (https://docs.konghq.com/hub/), where you can find over 100 plugins related to authentication, security, traffic control, analytics, logging, and more. Today, we’ll look at two commonly used plugins: Response Transformer and Rate Limiting.

Image

The Response Transformer plugin allows modifications to the response data by adding, replacing, or removing response headers or body. The Rate Limiting plugin, as its name suggests, sets limits on client access in terms of requests per time unit.

To define a plugin, we need to use a CRD resource called “KongPlugin”. You can use commands like kubectl api-resources or kubectl explain to view its apiVersion, kind, and other information:
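Concretely, that might look like this (output will vary with your cluster):

kubectl api-resources --api-group=configuration.konghq.com
kubectl explain kongplugin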

Image

Below are example definitions for these two plugins:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kong-add-resp-header-plugin

plugin: response-transformer
config:
  add:
    headers:
    - Resp-New-Header:kong-kic

---

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kong-rate-limiting-plugin

plugin: rate-limiting
config:
  minute: 2

The KongPlugin object, being a custom resource, is different from standard Kubernetes objects. It does not use the “spec” field but uses “plugin” to specify the plugin name and “config” to specify the plugin’s configuration parameters.

For example, here I let the Response Transformer plugin add a new response header field and the Rate Limiting plugin limit client access to two requests per minute.

Once these two plugin objects are defined, we can use annotations to enable the plugin functionality in the Ingress object:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing
  annotations:
    konghq.com/plugins: |
        kong-add-resp-header-plugin,
        kong-rate-limiting-plugin        

Now let’s apply these plugin objects and update the Ingress:

kubectl apply -f crd.yml

And then send a curl request:

curl $(minikube ip):32521 -H 'host: kong.test' -i

Image

You will see that the response headers have some new fields, where RateLimit-* represents the rate limiting information, and Resp-New-Header is the newly added response header field.

Run the curl command several times in quick succession, and you will see the rate-limiting plugin take effect:
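For instance, with a simple shell loop:

for i in 1 2 3 4; do curl $(minikube ip):32521 -H 'host: kong.test' -i; done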

Image

Kong Ingress Controller returns a 429 error, indicating that access has been rate-limited, and the "Retry-After" header tells you how many seconds to wait before sending new requests.

Summary #

Alright, today we learned another tool for managing ingress and egress traffic in Kubernetes: Kong Ingress Controller. Here are the key points:

  1. The underlying core of Kong Ingress Controller is still Nginx, but it is based on OpenResty and LuaJIT, which enables fully dynamic management of routing without the need for reloads.
  2. Kong Ingress Controller can be installed easily using the "database-less" approach; its Pod consists of two containers, the Controller and the Proxy.
  3. Kong Ingress Controller supports standard Ingress resources but also provides more advanced features through annotations and CRDs, especially plugins. These plugins enable flexible loading and unloading, allowing for complex traffic management policies.

As a prominent project in the cloud-native ecosystem, Kong Ingress Controller has been widely used and recognized. Furthermore, in its recent development it has started supporting the new Gateway API. We can discuss this in more detail next time we have the opportunity.

Homework #

Finally, it’s time for homework. I have two questions for you to think about:

  1. Can you compare Kong Ingress Controller with Nginx Ingress Controller? Which aspects of each do you value most?
  2. What are the benefits of the plugin mechanism, and can you list some similar projects in other fields?

Long time no see! Looking forward to hearing your thoughts. Let’s discuss in the comments section.
