
44 OpenResty’s Killer Move: Dynamic Handling #

Hello, I’m Wen Ming.

So far, I have nearly finished covering the performance-related content of OpenResty. I believe that mastering and flexibly applying these optimization techniques can improve the performance of your code by an order of magnitude. Today, in the final part on performance optimization, I will talk about a widely underestimated capability of OpenResty: its dynamism.

Let’s first look at what dynamism means here and how it relates to performance.

In this context, dynamism refers to a program’s ability to modify its parameters, configuration, and even its own code at runtime, without a reload. For Nginx and OpenResty, being able to change upstreams, SSL certificates, rate limiting, and throttling without restarting the service counts as dynamic. As for the relationship between dynamism and performance, it is quite obvious: if these operations cannot be done dynamically, frequent reloads of the Nginx service will inevitably cost performance.

However, we know that the open-source version of Nginx does not support these dynamic features. If you want to change an upstream or an SSL certificate, you have to modify the configuration file and reload the service for the change to take effect. The commercial Nginx Plus provides some dynamic capabilities through a REST API, but that is at best a partial improvement.

In OpenResty, however, these limitations do not exist, and its dynamic capabilities can be considered a killer feature. You may wonder why OpenResty, which is built on Nginx, can support dynamic behavior. The reason is simple: Nginx’s logic is implemented in C modules, while OpenResty’s is implemented in the Lua scripting language, and one major advantage of scripting languages is that they can change their behavior at runtime.

Dynamically Loading Code #

Now let’s take a look at how to dynamically load Lua code in OpenResty:

resty -e '
local s = [[ngx.say("hello world")]]
local func, err = loadstring(s)
func()'

You’re not mistaken. With just a few lines of code, you can turn a string into a Lua function and execute it. Let’s break down these lines of code:

  • First, we declare a string that contains valid Lua code to print hello world.
  • Then, we use the loadstring function in Lua to convert the string object into a function object called func.
  • Finally, we call func by adding parentheses after its name to execute it and print hello world.

Of course, we can build on this code snippet to create even more interesting and practical features. Next, I’ll show you some “fresh” examples.

Feature 1: FaaS #

First, let’s talk about Function as a Service (FaaS), which has been a popular technology trend in recent years. Let’s see how to implement it in OpenResty. In the code snippet below, the string is a piece of Lua code, which can be converted into a Lua function:

local s = [[
    return function()
        ngx.say("hello world")
    end
]]

As we’ve mentioned before, functions in Lua are first-class citizens. This piece of code returns an anonymous function. To execute this anonymous function, we use pcall to provide a layer of protection. pcall runs the function in protected mode and captures any exceptions: if it succeeds, it returns true and the result of the execution; otherwise, it returns false and the error message. The code is as follows:

local func1, err = loadstring(s)
local ret, func = pcall(func1)

By combining the above two parts, we can get a complete and executable example:

resty -e 'local s = [[
    return function()
        ngx.say("hello world")
    end
]]
local func1 = loadstring(s)
local ret, func = pcall(func1)
func()'
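For brevity, the example above ignores the second return values and calls func unconditionally. Since loadstring returns nil plus an error message when the string is not valid Lua, and pcall returns false plus the error when the chunk throws, a slightly more defensive variant (shown here purely as an illustration) would check both before calling the function:

resty -e 'local s = [[
    return function()
        ngx.say("hello world")
    end
]]
local func1, err = loadstring(s)
if not func1 then
    ngx.say("failed to load the code: ", err)
    return
end
local ok, func = pcall(func1)
if not ok then
    ngx.say("failed to run the chunk: ", func)
    return
end
func()'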

Furthermore, we can change the string s, which contains the function, into something the user can specify, and add conditions that control when it is executed. This is essentially a prototype of FaaS. I provide a complete implementation here; if you are interested in FaaS and want to study it further, I recommend following this link for a deeper look.
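To make this idea more concrete, here is a minimal, illustrative sketch of such a prototype: a location that accepts user-submitted Lua source in the request body, compiles it with loadstring, and runs the returned function. The location name is made up, and there is deliberately no authentication or sandboxing here, so treat it as a toy; running arbitrary user code like this would be unsafe in production.

location /faas {
    content_by_lua_block {
        -- read the user-submitted Lua source from the request body
        ngx.req.read_body()
        local src = ngx.req.get_body_data()
        if not src then
            return ngx.exit(ngx.HTTP_BAD_REQUEST)
        end

        -- compile the string; the submitted code is expected to return a function
        local chunk, err = loadstring(src)
        if not chunk then
            ngx.log(ngx.ERR, "failed to load user code: ", err)
            return ngx.exit(ngx.HTTP_BAD_REQUEST)
        end

        -- run the chunk in protected mode to obtain the function, then call it
        local ok, func = pcall(chunk)
        if not ok or type(func) ~= "function" then
            ngx.log(ngx.ERR, "user code did not return a function")
            return ngx.exit(ngx.HTTP_BAD_REQUEST)
        end
        pcall(func)
    }
}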

Feature 2: Edge Computing #

OpenResty’s dynamism is not only useful for FaaS, where the dynamism of a scripting language is brought down to the granularity of individual functions; it can also be put to good use in edge computing.

Thanks to the strong multi-platform support of Nginx and LuaJIT, OpenResty not only runs on the x86 architecture but also has excellent support for ARM. At the same time, OpenResty supports both layer-4 and layer-7 proxying, so common protocols, including several used in IoT, can be parsed and proxied by OpenResty.

Because of these advantages, we can extend OpenResty’s reach from server-side domains such as API gateways, WAFs, and web servers to edge nodes closest to users, such as IoT devices, CDN edge nodes, and routers.

This is not just a fantasy. In fact, OpenResty has already been widely used in the above-mentioned domains. Taking CDN edge nodes as an example, CloudFlare, one of the largest users of OpenResty, has long used OpenResty’s dynamic features to achieve dynamic control over CDN edge nodes.

CloudFlare’s approach is similar to the principle of dynamically loading code mentioned earlier, and can be roughly divided into the following steps:

  • First, obtain the changed code files from a key-value database cluster, which can be done by background timer polling or by subscribing to changes using a “publish-subscribe” pattern.
  • Then, replace the old files on the local disk with the updated code files, and update the cache loaded in memory using loadstring and pcall.

As a result, the next client request to be processed will follow the updated code logic.
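Here is a minimal sketch of the second step, just to illustrate the mechanism. The function names, the polling interval, and the module-level cache are my own assumptions, not CloudFlare’s actual implementation; fetch_changed_handlers stands in for whatever reads the key-value store. A snippet like this would typically live in init_worker_by_lua:

-- module-level cache of compiled handler functions
local handler_cache = {}

-- placeholder for the code that reads changed files from the key-value store
local function fetch_changed_handlers()
    return {}
end

-- compile a changed handler and swap it into the cache only if it loads and runs cleanly
local function update_handler(name, src)
    local chunk, err = loadstring(src)
    if not chunk then
        ngx.log(ngx.ERR, "failed to load ", name, ": ", err)
        return
    end

    local ok, func = pcall(chunk)
    if not ok or type(func) ~= "function" then
        ngx.log(ngx.ERR, "failed to evaluate ", name, ": ", func)
        return
    end

    handler_cache[name] = func
end

-- poll for changes every 5 seconds in each worker
ngx.timer.every(5, function(premature)
    if premature then
        return
    end
    for name, src in pairs(fetch_changed_handlers()) do
        update_handler(name, src)
    end
end)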

Of course, the actual application needs to consider more details than the steps above, such as version control and rollback, exception handling, network interruptions, edge node restarts, etc. But the overall process remains the same.

If we move CloudFlare’s approach from CDN edge nodes to other edge scenarios, we can dynamically push a lot of computing work out to edge devices. This not only makes full use of the computing power of edge nodes, but also gives users faster responses, since the edge nodes process the raw data and only send aggregated results back to remote servers, greatly reducing the amount of data transmitted.

However, OpenResty’s dynamism is only a good foundation for implementing FaaS and edge computing successfully; you also need to consider the completeness of the surrounding ecosystem and the participation of vendors, which goes beyond the realm of technology.

Dynamic Upstream #

Now, let’s come back to OpenResty itself and see how to implement a dynamic upstream. lua-resty-core provides the ngx.balancer library for setting the upstream; it must run in OpenResty’s balancer phase:

balancer_by_lua_block {
    local balancer = require "ngx.balancer"
    local host = "127.0.0.2"
    local port = 8080

    local ok, err = balancer.set_current_peer(host, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to set the current peer: ", err)
        return ngx.exit(500)
    end
}

Let me explain briefly. The set_current_peer function is used to set the IP address and port of the upstream. However, please note that domain names are not supported here. You need to use the lua-resty-dns library to resolve the domain names to IP addresses.
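Because the cosocket API is disabled in the balancer phase, the DNS lookup itself usually has to happen in an earlier phase, such as access, with the result handed over through ngx.ctx. Here is a minimal sketch of that pattern; the domain name, the public nameserver, and the port are placeholders of my own:

access_by_lua_block {
    local resolver = require "resty.dns.resolver"

    local r, err = resolver:new{
        nameservers = { "8.8.8.8" },   -- example nameserver only
        retrans = 5,
        timeout = 2000,                -- 2 seconds
    }
    if not r then
        ngx.log(ngx.ERR, "failed to create the resolver: ", err)
        return ngx.exit(500)
    end

    local answers, err = r:query("www.example.com", { qtype = r.TYPE_A })
    if not answers or answers.errcode then
        ngx.log(ngx.ERR, "failed to resolve the domain: ",
                err or answers.errstr)
        return ngx.exit(502)
    end

    -- hand the first A record over to the balancer phase
    ngx.ctx.upstream_host = answers[1].address
}

balancer_by_lua_block {
    local balancer = require "ngx.balancer"

    local ok, err = balancer.set_current_peer(ngx.ctx.upstream_host, 8080)
    if not ok then
        ngx.log(ngx.ERR, "failed to set the current peer: ", err)
        return ngx.exit(500)
    end
}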

However, ngx.balancer is quite low-level. It can set the upstream peer, but implementing a dynamic upstream takes more than that. Two more capabilities are needed on top of ngx.balancer:

  • An upstream selection algorithm, such as consistent hashing or round-robin.
  • An upstream health-check mechanism, which removes unhealthy nodes and adds them back once they become healthy again.

OpenResty’s official lua-resty-balancer library provides two algorithms, resty.chash and resty.roundrobin, to cover the first; and lua-resty-upstream-healthcheck attempts to cover the second.
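To give a feel for how the pieces fit together, here is a minimal sketch that uses resty.roundrobin to pick a peer and ngx.balancer to set it; the server list and weights are made up, and the health check is omitted. Note that a real setup would build the roundrobin object once (for example in init_by_lua) instead of on every request, as done here for brevity:

upstream dynamic_upstream {
    server 0.0.0.1;   # just a placeholder; the real peer is chosen below

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        local roundrobin = require "resty.roundrobin"

        -- example server list: "ip:port" keys mapped to weights
        local rr = roundrobin:new({
            ["127.0.0.1:8081"] = 2,
            ["127.0.0.1:8082"] = 1,
        })

        -- pick the next peer and split it back into host and port
        local server = rr:find()
        local host, port = server:match("^(.+):(%d+)$")

        local ok, err = balancer.set_current_peer(host, tonumber(port))
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}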

However, there are still two issues here.

First, there is no complete last-mile implementation. Gluing ngx.balancer, lua-resty-balancer, and lua-resty-upstream-healthcheck together into a dynamic upstream still takes real effort, which puts many developers off.

Second, the implementation of lua-resty-upstream-healthcheck is not complete. It only has passive health checks and lacks active health checks.

Let me explain briefly. A passive health check is triggered by client requests: the upstream’s responses to real traffic are analyzed and used as the criterion for judging its health. If no client requests arrive, there is no way to know whether the upstream is healthy. An active health check makes up for this deficiency by using ngx.timer to periodically poll a given upstream endpoint and check its health status.
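As a bare-bones illustration of an active check, the following sketch registers a worker-level timer that periodically tries a TCP connect to each upstream and records the result in a shared dict (it assumes a lua_shared_dict named healthcheck is declared in the http block). The peer list, interval, and dict name are placeholders; a real implementation would also need HTTP-level probes, failure thresholds, and de-duplication across workers:

init_worker_by_lua_block {
    local peers = {
        { host = "127.0.0.1", port = 8081 },
        { host = "127.0.0.1", port = 8082 },
    }

    local function check(premature)
        if premature then
            return
        end

        local dict = ngx.shared.healthcheck
        for _, peer in ipairs(peers) do
            local sock = ngx.socket.tcp()
            sock:settimeout(1000)  -- 1 second
            local ok = sock:connect(peer.host, peer.port)

            -- record "up" or "down" so the balancer phase can skip dead peers
            dict:set(peer.host .. ":" .. peer.port, ok and "up" or "down")
            if ok then
                sock:close()
            end
        end
    end

    -- probe every 5 seconds in each worker
    ngx.timer.every(5, check)
}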

Therefore, in practical applications, we usually recommend using the lua-resty-healthcheck library to perform health checks on upstream. Its advantage is that it includes both active and passive health checks and has been validated in multiple projects, providing higher reliability.

Furthermore, APISIX, an emerging microservice API gateway, implements dynamic upstream on top of lua-resty-healthcheck, and its implementation is worth referring to: it is only around 200 lines of code, and you can easily extract it and use it in your own project.

In Conclusion #

After discussing so much, I would like to leave you with a question to ponder: in what other areas and scenarios do you think OpenResty’s capabilities could shine? Keep in mind that each topic introduced in this chapter can be analyzed in much more detail and depth.

Feel free to leave a comment and discuss with me. Also, feel free to share this article with others to learn and progress together.