15 Why Principles and Code Implementation of LRU Algorithm Are Different #

Starting from this lesson, we will enter the third module of the course: the cache module. In the next three lessons, I will give you a detailed introduction to the implementation of LRU and LFU algorithms in the Redis source code, as well as the impact of lazy deletion on caching.

By learning this part, on one hand, you will see how these classic caching algorithms are designed and implemented in a practical system. On the other hand, you will also learn an important principle in computer system design: balancing algorithm complexity against implementation complexity during development. Additionally, you will learn how cache replacement and lazy deletion free Redis memory. Memory is a precious resource for Redis, so mastering this part will help you effectively reduce Redis's memory usage.

Alright, so in today’s lesson, let’s start by learning the implementation of the LRU algorithm in Redis.

The Basic Principles of LRU Algorithm #

First, we need to understand the basic principles of the LRU (Least Recently Used) algorithm, a classic caching algorithm.

In terms of basic principles, the LRU algorithm uses a linked list to maintain the access status of each data item in the cache, adjusting an item's position in the list as accesses happen in real time. An item's position in the list indicates whether it was accessed recently or has gone a while without being accessed.

Specifically, the LRU algorithm treats the head and tail of the linked list as the Most Recently Used (MRU) end and the Least Recently Used (LRU) end, respectively. The MRU end holds the data that has just been accessed, while the LRU end holds the data that has gone the longest without being accessed.

I have introduced the execution process of the LRU algorithm in the first season’s course. Here, let’s briefly review it. The execution of the LRU algorithm can be divided into three cases.

  • Case 1: When a new data item is inserted, the LRU algorithm inserts it at the head of the linked list, and the original head and all subsequent items move one position toward the tail.
  • Case 2: When an item already in the list is accessed again, the LRU algorithm moves it from its current position to the head, and the items between the head and its old position move one position toward the tail.
  • Case 3: When the linked list is full and new data needs to be inserted, the LRU algorithm removes the item at the tail of the list, which is equivalent to evicting that data from the cache.

The following figure shows the second case of the execution process of the LRU algorithm. In this case, the length of the linked list is 5, and the data stored from the head to the tail of the linked list are 5, 33, 9, 10, 8. Suppose data 9 is accessed once, then 9 will be moved to the head of the linked list, and data 5 and 33 will be moved one position towards the tail.
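The three cases above can be made concrete with a minimal sketch of a strict LRU list. This is my own illustration, not Redis source: the list is modeled as an array whose index 0 is the MRU end and whose last slot is the LRU end, and `lru_access` is a hypothetical helper covering all three cases.

```c
#include <string.h>

#define CAP 5

typedef struct {
    int keys[CAP];   /* keys[0] is the MRU end, keys[len-1] the LRU end */
    int len;
} lru_list;

/* Access (or insert) a key: move it to the MRU end, evicting the LRU
 * entry when the list is full. Returns the evicted key, or -1 if none. */
int lru_access(lru_list *l, int key) {
    int i, evicted = -1;
    for (i = 0; i < l->len; i++)
        if (l->keys[i] == key) break;   /* case 2: key already in the list */
    if (i == l->len) {                  /* case 1: key not found, insert it */
        if (l->len == CAP) {            /* case 3: list full, evict the tail */
            evicted = l->keys[CAP - 1];
            i = CAP - 1;
        } else {
            i = l->len++;
        }
    }
    /* shift everything before position i one slot toward the tail */
    memmove(&l->keys[1], &l->keys[0], i * sizeof(int));
    l->keys[0] = key;
    return evicted;
}
```

Running the figure's scenario (list 5, 33, 9, 10, 8 from head to tail, then accessing 9) moves 9 to the head and shifts 5 and 33 toward the tail, exactly as described.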

So you can see that if you strictly implement the basic principles of the LRU algorithm, you need to implement the following in your code:

  • Maintain a linked list for all data that Redis can accommodate when using maximum memory.
  • Perform multiple linked list operations whenever new data is inserted or existing data is accessed again.

If Redis stores a large amount of data, these two parts of code implementation will require additional memory space to store the linked list and will affect the Redis access performance due to data movement and linked list operations.

Therefore, in order to save precious memory space and maintain high Redis performance, the Redis source code does not strictly implement the basic principles of the LRU algorithm, but provides an implementation of an approximate LRU algorithm.

Next, let’s understand how this approximate LRU algorithm works.

Implementation of Approximate LRU Algorithm in Redis #

Before diving into the implementation of the approximate LRU algorithm in Redis, let's first take a look at how Redis enables the approximate LRU algorithm in its memory eviction mechanism. This will help us understand the configuration options related to it.

In fact, this is related to two configuration parameters in the Redis configuration file redis.conf:

  • maxmemory: This configuration option sets the maximum memory capacity that the Redis server can use. Once the actual memory usage exceeds this threshold, the server will perform memory eviction operations based on the strategy defined by the maxmemory-policy configuration option.
  • maxmemory-policy: This configuration option sets the memory eviction strategy of the Redis server, including approximate LRU algorithm, LFU algorithm, eviction based on TTL value, and random eviction.

Therefore, once we set the maxmemory option and configure maxmemory-policy as allkeys-lru or volatile-lru, the approximate LRU algorithm is enabled. Here, you need to note that both allkeys-lru and volatile-lru use the approximate LRU algorithm for data eviction, but the difference between them is: when using the allkeys-lru strategy to evict data, it selects the data to be evicted from all key-value pairs, while the volatile-lru strategy selects the data to be evicted from key-value pairs with expiration time.
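For example, enabling the approximate LRU algorithm over all key-value pairs could look like the following fragment of redis.conf (the 4gb limit here is just an illustrative value):

```
maxmemory 4gb
maxmemory-policy allkeys-lru
```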

Okay, now that we understand how to enable the approximate LRU algorithm, let’s learn about the specific implementation of the approximate LRU algorithm in Redis. Here, for your convenience, I have divided the Redis implementation of the approximate LRU algorithm into three parts.

  • Calculation of the Global LRU Clock Value: This part covers how Redis calculates the global LRU clock value, which the approximate LRU algorithm uses to determine how recently data was accessed.
  • Initialization and Update of the Key-Value Pair LRU Clock Value: This part includes in which functions of the Redis source code the LRU clock value for each key-value pair is initialized and updated.
  • Actual Execution of the Approximate LRU Algorithm: This part includes how Redis source code executes the approximate LRU algorithm, i.e., when data eviction is triggered, and how the actual eviction mechanism is implemented.

Now, let’s start by looking at the calculation of the global LRU clock value.

Calculation of the Global LRU Clock Value #

Although Redis uses the approximate LRU algorithm, it still needs to distinguish the recency of access for different data. In other words, Redis needs to know the most recent access time of the data. Therefore, Redis uses an LRU clock to record the timestamp of each access to the data.

As we learned in [Lesson 4], Redis uses the redisObject structure to save a pointer to the value for each key-value pair in the source code. In addition to storing the pointer to the value, the redisObject structure actually uses 24 bits to store the LRU clock information, which corresponds to the lru member variable. Therefore, each key-value pair will have its most recent access timestamp recorded in the lru variable.

The definition of the redisObject structure is in server.h, which includes the definition of the lru member variable. You can take a look at it.

typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS;  // Records LRU information, LRU_BITS is defined as 24 bits
    int refcount;
    void *ptr;
} robj;

So how is the LRU clock value calculated for each key-value pair? In fact, Redis server uses a global LRU clock at the instance level, and the LRU clock value for each key-value pair is set based on this global LRU clock.

This global LRU clock is stored in the member variable lruclock of the Redis global variable server. When Redis server is started and the initServerConfig function is called to initialize various parameters, the value of this global LRU clock lruclock will be set. Specifically, the initServerConfig function calls the getLRUClock function to set the value of lruclock, as shown below:

void initServerConfig(void) {
    ...
    unsigned int lruclock = getLRUClock(); // Call the getLRUClock function to calculate the global LRU clock value
    atomicSet(server.lruclock, lruclock);  // Set lruclock to the just-calculated LRU clock value
    ...
}

Therefore, the global LRU clock value is calculated through the getLRUClock function.

The getLRUClock function is implemented in the evict.c file. It calls the mstime function (in the server.c file) to obtain the current UNIX timestamp in milliseconds, and then divides this timestamp by the macro LRU_CLOCK_RESOLUTION. LRU_CLOCK_RESOLUTION is defined in the server.h file and represents the LRU clock resolution in milliseconds, that is, the smallest time unit the LRU clock can distinguish.

Because the default value of LRU_CLOCK_RESOLUTION is 1000, the LRU clock resolution is 1000 milliseconds, which is 1 second.

What you need to pay attention to here is that if two consecutive accesses to a piece of data are less than 1 second apart, their timestamps will be identical: since the precision of the LRU clock is 1 second, it cannot distinguish accesses that fall within the same second.

Now that you have understood the meaning of the macro definition LRU_CLOCK_RESOLUTION, let’s take a look at the calculation in the getLRUClock function.

First, the getLRUClock function divides the obtained UNIX timestamp by LRU_CLOCK_RESOLUTION to obtain the UNIX timestamp calculated with the LRU clock precision, which is the current LRU clock value.

Then, the getLRUClock function performs a bitwise AND operation between the LRU clock value and the macro definition LRU_CLOCK_MAX, where LRU_CLOCK_MAX represents the maximum value that the LRU clock can represent.

The following code shows the macro definitions and the execution logic of the getLRUClock function mentioned earlier, which you can take a look at.

#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1)  // Maximum value that the LRU clock can represent
#define LRU_CLOCK_RESOLUTION 1000 // LRU clock resolution in milliseconds

unsigned int getLRUClock(void) {
    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}
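The same arithmetic can be reproduced in isolation. In the sketch below (my own illustration; only the macros come from the Redis source), the current time is passed in as a parameter instead of calling mstime, which makes the 24-bit wraparound easy to see: the clock rolls over every 2^24 seconds, roughly 194 days.

```c
#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1 << LRU_BITS) - 1)
#define LRU_CLOCK_RESOLUTION 1000

/* Hypothetical helper: same computation as getLRUClock, but with the
 * millisecond timestamp injected as a parameter for illustration. */
unsigned int lru_clock_from_ms(long long ms) {
    return (unsigned int)((ms / LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX);
}
```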

So now you know that by default, the global LRU clock value in Redis is calculated as a UNIX timestamp with a precision of 1 second, and it is initialized in the initServerConfig function. Now you might be wondering: how is the global LRU clock value updated during the running of the Redis server?

This is related to the serverCron function, which corresponds to the time event that Redis server runs regularly in its event-driven framework.

As a callback function for time events, the serverCron function itself is executed periodically at a certain frequency. The frequency is determined by the hz configuration option in the Redis configuration file redis.conf. The default value of hz is 10, which means the serverCron function runs every 100 milliseconds (1 second / 10 = 100 milliseconds).

In this way, in the serverCron function, the global LRU clock value is periodically updated by calling the getLRUClock function according to the frequency of this function, as shown below:

int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
    ...
    unsigned long lruclock = getLRUClock(); // By default, getLRUClock is called every 100 milliseconds to update the global LRU clock value
    atomicSet(server.lruclock,lruclock);    // Set the lruclock variable
    ...
}

Therefore, each key-value pair can obtain the latest access timestamp from the global LRU clock value.

Now, let’s understand where the initialization and update of the LRU clock value for each key-value pair occur.

Initialization and update of the key-value pair LRU clock value #

First, for a key-value pair, its LRU clock value is initially set when the pair is created. This initialization happens in the createObject function, which is implemented in the object.c file and is called whenever Redis creates a key-value pair.

In addition to allocating memory space for the redisObject structure, the createObject function also initializes the lru variable in the redisObject structure based on the value of the maxmemory_policy configuration option I mentioned earlier.

Specifically, if the maxmemory_policy configuration is set to use the LFU policy, the lru variable is initialized as the calculated value of the LFU algorithm (I will explain the implementation of the LFU algorithm in the next lesson). If the maxmemory_policy configuration does not use the LFU policy, the createObject function calls the LRU_CLOCK function to set the value of the lru variable, which is the LRU clock value corresponding to the key-value pair.

The following code shows this part of the execution logic:

robj *createObject(int type, void *ptr) {
    robj *o = zmalloc(sizeof(*o));
    ...
    // If the cache eviction policy is LFU, set the lru variable to the LFU count value
    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
        o->lru = (LFUGetTimeInMinutes()<<8) | LFU_INIT_VAL;
    } else {
        o->lru = LRU_CLOCK();  // Otherwise, call the LRU_CLOCK function to get the LRU clock value
    }
    return o;
}

Now, a new question arises: When is the LRU clock value of a key-value pair updated again?

In fact, once a key-value pair is accessed, its LRU clock value is updated. When a key-value pair is accessed, the access operation eventually calls the lookupKey function.

The lookupKey function is implemented in the db.c file. It looks up the key-value pair from the global hash table. If the key-value pair exists, the lookupKey function updates the LRU clock value of the key-value pair according to the configuration value of maxmemory_policy, which is the access timestamp.

When maxmemory_policy is not configured as the LFU policy, the lookupKey function calls the LRU_CLOCK function to get the current global LRU clock value and assigns it to the lru variable in the redisObject structure of the key-value pair, as shown below:

robj *lookupKey(redisDb *db, robj *key, int flags) {
    dictEntry *de = dictFind(db->dict,key->ptr); // Look up the key-value pair
    if (de) {
        robj *val = dictGetVal(de);  // Get the redisObject structure corresponding to the key-value pair
        ...
        if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
            updateLFU(val);  // If LFU policy is used, update the LFU count value
        } else {
            val->lru = LRU_CLOCK();  // Otherwise, call the LRU_CLOCK function to get the global LRU clock value
        }
        ...
    }
    ...
}

In this way, once a key-value pair is accessed, it can obtain the latest access timestamp. But now you may ask: how are these access timestamps used to approximate the LRU algorithm and perform data eviction?

Next, let’s learn how the actual execution process of the approximate LRU algorithm works.

Execution of the Approximate LRU Algorithm #

Now that we know that Redis implements the approximate LRU algorithm to reduce memory and operation time overhead, we can understand the execution process of the algorithm from two perspectives:

  • When is the algorithm triggered?
  • How is the algorithm executed?

When is the algorithm triggered? #

Firstly, the main logic of the approximate LRU algorithm is implemented in the freeMemoryIfNeeded function, which is located in the evict.c file.

The freeMemoryIfNeeded function is called by the freeMemoryIfNeededAndSafe function (also in the evict.c file), which is in turn called by the processCommand function. Below is a diagram showing the relationship between these three functions.

The processCommand function, as you may recall, is called every time Redis executes a command. I introduced it in [Lesson 14], which you can review if needed.

When the processCommand function is executed, it actually determines whether to call the freeMemoryIfNeededAndSafe function based on two conditions.

  • Condition 1: The maxmemory configuration option is set to a non-zero value.
  • Condition 2: No Lua script is currently timed out.

If both conditions are met, the freeMemoryIfNeededAndSafe function will be called by the processCommand function, as shown below:

if (server.maxmemory && !server.lua_timedout) {
    int out_of_memory = freeMemoryIfNeededAndSafe() == C_ERR;
    ...
}

Then, the freeMemoryIfNeededAndSafe function will further determine whether to call the freeMemoryIfNeeded function based on two conditions.

  • Condition 1: A Lua script has timed out.
  • Condition 2: The Redis server is loading data.

In other words, the freeMemoryIfNeeded function will only be called if neither of these two conditions is met. The following code shows the execution logic of the freeMemoryIfNeededAndSafe function:

int freeMemoryIfNeededAndSafe(void) {
    if (server.lua_timedout || server.loading) return C_OK;
    return freeMemoryIfNeeded();
}

Therefore, once the freeMemoryIfNeeded function is called, and the maxmemory-policy is set to either allkeys-lru or volatile-lru, the approximate LRU algorithm starts to be executed. Next, let’s see how the approximate LRU algorithm is executed, which means understanding the main execution flow of the freeMemoryIfNeeded function.

How is the approximate LRU algorithm executed? #

The execution of the approximate LRU algorithm can be divided into three major steps: evaluating the current memory usage, updating the set of candidate key-value pairs to be evicted, and selecting and deleting the evicted key-value pair. Let’s look at each step in detail.

  • Evaluating the current memory usage

Firstly, the freeMemoryIfNeeded function calls the getMaxmemoryState function to evaluate the current memory usage. The getMaxmemoryState function is implemented in the evict.c file, and it determines whether the memory capacity used by the Redis server exceeds the value configured for maxmemory.

If the current memory usage does not exceed maxmemory, the getMaxmemoryState function returns C_OK, and subsequently, the freeMemoryIfNeeded function directly returns.

int freeMemoryIfNeeded(void) {
    ...
    if (getMaxmemoryState(&mem_reported,NULL,&mem_tofree,NULL) == C_OK)
        return C_OK;
    ...
}

Here, it’s worth noting that when evaluating the current memory usage, if maxmemory is exceeded, the getMaxmemoryState function calculates the amount of memory that needs to be freed. This amount is equal to the used memory minus maxmemory. However, the used memory does not include the replication buffer size used for master-slave replication. This is calculated by the getMaxmemoryState function through a call to the freeMemoryGetNotCountedMemory function.

I have included the basic execution logic of the getMaxmemoryState function below for reference.

int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level) {
    ...
    mem_reported = zmalloc_used_memory(); // Calculate the used memory
    ...
    // Deduct the replication buffer size used for master-slave replication from the used memory
    mem_used = mem_reported;
    size_t overhead = freeMemoryGetNotCountedMemory();
    mem_used = (mem_used > overhead) ? mem_used-overhead : 0;
    ...
    // Calculate the amount of memory to be freed
    mem_tofree = mem_used - server.maxmemory;
    ...
}
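The arithmetic can be condensed into a small helper. This is a hypothetical function for illustration only (getMaxmemoryState itself does more bookkeeping), with made-up numbers in the usage below:

```c
#include <stddef.h>

/* Hypothetical helper mirroring the calculation above: usable memory is
 * the reported memory minus the replication-buffer overhead, and the
 * amount to free is whatever exceeds maxmemory. */
size_t mem_to_free(size_t mem_reported, size_t overhead, size_t maxmemory) {
    size_t mem_used = (mem_reported > overhead) ? mem_reported - overhead : 0;
    return (mem_used > maxmemory) ? mem_used - maxmemory : 0;
}
```

For example, with 150 MB reported, 20 MB of replication buffers, and a 100 MB maxmemory, 30 MB would need to be freed.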

If the current memory usage does exceed the upper limit of maxmemory, the freeMemoryIfNeeded function enters a while loop to evict data and free memory.

To evict data, Redis defines an array called EvictionPoolLRU to store the candidate key-value pairs. This array consists of elements of the evictionPoolEntry structure, which stores information such as the idle time of the candidate key-value pair, the corresponding key, and the cached SDS object. The EvictionPoolLRU array and evictionPoolEntry structure are defined in the evict.c file, as shown in the following code:

static struct evictionPoolEntry *EvictionPoolLRU;

struct evictionPoolEntry {
    unsigned long long idle;    // Idle time of the candidate key-value pair
    sds key;                    // Key of the candidate key-value pair to be evicted
    sds cached;                 // Cached SDS object
    int dbid;                   // ID of the database where the candidate key-value pair's key resides
};

In this way, when the Redis server executes the initServer function to initialize, it will call the evictionPoolAlloc function (in the evict.c file) to allocate memory space for the EvictionPoolLRU array. The size of this array is determined by the macro EVPOOL_SIZE (in the evict.c file) and is set to 16 elements by default, which means it can hold 16 key-value pairs to be evicted.

Then, in the eviction loop of the freeMemoryIfNeeded function, this set of key-value pairs to be evicted, namely the EvictionPoolLRU array, will be updated. Now I will give you a detailed introduction.

  • Updating the set of key-value pairs to be evicted

Firstly, the freeMemoryIfNeeded function calls the evictionPoolPopulate function (in the evict.c file), which in turn calls the dictGetSomeKeys function (in the dict.c file) to randomly retrieve a certain number of keys from the sampled hash table. However, there are two points you need to pay attention to here.

The first point is that the hash table sampled by the dictGetSomeKeys function is determined by the maxmemory_policy configuration item. If maxmemory_policy is set to allkeys-lru, the sampled hash table is the global hash table of the Redis server, meaning sampling is performed among all key-value pairs. Otherwise, the sampled hash table is the one that stores keys with expiration times.

Below is the code showing the calling process of the evictionPoolPopulate function within the freeMemoryIfNeeded function, you can take a look.

for (i = 0; i < server.dbnum; i++) {
   db = server.db+i;   // iterate over each database on the Redis server
   // Determine whether to use the global hash table or the hash table with expiration times based on the eviction policy
   dict = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) ? db->dict : db->expires;
   if ((keys = dictSize(dict)) != 0) {
       // Pass the selected dict and the global hash table to the evictionPoolPopulate function
       evictionPoolPopulate(i, dict, db->dict, pool);
       total_keys += keys;
    }
}

The second point is that the number of keys sampled by the dictGetSomeKeys function is determined by the maxmemory-samples configuration item in redis.conf, with a default value of 5. The code below shows the invocation of the dictGetSomeKeys function within the evictionPoolPopulate function:

void evictionPoolPopulate(int dbid, dict *sampledict, dict *keydict, struct evictionPoolEntry *pool) {
    ...
    dictEntry *samples[server.maxmemory_samples];  // Sampled set with size of maxmemory_samples
    // Pass the sampledict, samples set, and the number of samples (maxmemory_samples) as parameters to dictGetSomeKeys
    count = dictGetSomeKeys(sampledict,samples,server.maxmemory_samples);
    ...
}

With this, the dictGetSomeKeys function can return the set of sampled key-value pairs. Then, the evictionPoolPopulate function will continue with a loop based on the actual number of sampled key-value pairs.

During this loop process, the evictionPoolPopulate function will call the estimateObjectIdleTime function to calculate the idle time of each key-value pair in the sampled set, as shown below:

for (j = 0; j < count; j++) {
    ...
    if (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) {
        idle = estimateObjectIdleTime(o);  // Calculate the idle time of the key-value pair
    }
    ...
}
Next, the evictionPoolPopulate function will iterate over the set of key-value pairs to be evicted, namely the EvictionPoolLRU array. During this iteration, it will attempt to insert each sampled key-value pair into the EvictionPoolLRU array, mainly depending on one of the following two conditions:

  • First, it can find an empty slot in the array that has not been filled with a key-value pair.
  • Second, it can find a key-value pair in the array that has a shorter idle time than the idle time of the sampled key-value pair.

If either of these conditions is met, the evictionPoolPopulate function can insert the sampled key-value pair into the EvictionPoolLRU array. After handling all the sampled key-value pairs, the evictionPoolPopulate function completes the update of the candidate key-value pairs to be evicted.
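This insertion step can be sketched in simplified form. The model below is my own (integer keys instead of sds strings, a `used` flag standing in for the NULL-key check, and none of the sds caching the real evictionPoolPopulate does); it keeps the pool sorted in ascending order of idle time, dropping the entry with the smallest idle time when the pool is full:

```c
#include <string.h>

#define EVPOOL_SIZE 16

struct pool_entry {
    unsigned long long idle;   /* idle time of the candidate */
    int key;                   /* stand-in for the real sds key */
    int used;                  /* 1 if the slot is occupied */
};

/* Insert a sampled key into the pool, keeping ascending idle order. */
void pool_insert(struct pool_entry *pool, int key, unsigned long long idle) {
    int k = 0;
    /* find the first slot that is empty or holds a larger idle time */
    while (k < EVPOOL_SIZE && pool[k].used && pool[k].idle < idle) k++;
    if (k == 0 && pool[EVPOOL_SIZE - 1].used)
        return;   /* pool is full and every entry is idler than this key */
    if (!pool[EVPOOL_SIZE - 1].used) {
        /* free space on the right: shift entries from k one slot right */
        memmove(pool + k + 1, pool + k,
                (EVPOOL_SIZE - k - 1) * sizeof(*pool));
    } else {
        /* pool full: drop slot 0 (smallest idle time) by shifting the
         * entries before k one slot left, then insert at k-1 */
        k--;
        memmove(pool, pool + 1, k * sizeof(*pool));
    }
    pool[k].idle = idle;
    pool[k].key = key;
    pool[k].used = 1;
}
```

After a few insertions the pool ends up ordered by idle time, which is exactly the property the selection step relies on.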

Next, the freeMemoryIfNeeded function can start selecting the key-value pair to be evicted.

  • Select the evicted key-value pair and remove it

Because the evictionPoolPopulate function has updated the EvictionPoolLRU array, and the keys in this array are sorted in ascending order of idle time, the freeMemoryIfNeeded function will iterate through the EvictionPoolLRU array starting from the last key. If the selected key is not a null value, it will be chosen as the final evicted key.

The basic execution logic of this process is as follows:

for (k = EVPOOL_SIZE-1; k >= 0; k--) { // Start searching from the last key in the array
    if (pool[k].key == NULL) continue; // If the current slot is empty, continue to the next one

    ... // Get the key-value pair corresponding to the current key from the global hash table or the expires hash table, and remove the current key from the EvictionPoolLRU array

    // If the key-value pair corresponding to the current key is not null, choose the current key as the evicted key
    if (de) {
        bestkey = dictGetKey(de);
        break;
    } // Otherwise, continue searching with the next key
}

Finally, once the evicted key is selected, the freeMemoryIfNeeded function will perform synchronous deletion or asynchronous deletion based on the Redis server’s lazy-free configuration, as shown below:

if (bestkey) {
    db = server.db+bestdbid;
    robj *keyobj = createStringObject(bestkey,sdslen(bestkey));
    // Send the deletion of the key to slaves and the AOF file
    propagateExpire(db,keyobj,server.lazyfree_lazy_eviction);
    if (server.lazyfree_lazy_eviction)
        dbAsyncDelete(db,keyobj);   // If lazy eviction is configured, delete asynchronously
    else
        dbSyncDelete(db,keyobj);    // Otherwise, delete synchronously
}

Okay, at this point, the freeMemoryIfNeeded function has evicted a key. If the freed memory is still not enough, meaning it hasn’t reached the required freed space mentioned earlier, the freeMemoryIfNeeded function will repeat the process of updating the candidate key-value pairs to be evicted and selecting the final evicted key until the required freed space is achieved.

The following diagram shows the basic flow involved in the freeMemoryIfNeeded function. You can review the overall process again.

Diagram

In fact, from the content I just introduced, you can see that the approximate LRU algorithm does not use a time-consuming and space-consuming linked list, but instead uses a fixed-size set of candidate data to be evicted and randomly selects some keys to add to this set each time. Finally, it deletes the key with the longest idle time according to the idle time of the keys in the set. In this way, Redis approximates the effect of the LRU algorithm.
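Putting it together, one eviction round of the approximate algorithm can be modeled in a few lines. This toy model is entirely illustrative (the names, sizes, and the caller-supplied sample are not Redis's): each key carries an access stamp, and a round evicts the sampled key with the largest idle time.

```c
typedef struct {
    unsigned int lru;   /* last-access stamp of the key */
    int alive;          /* 1 while the key is still in the cache */
} toy_obj;

/* Evict the least recently used key among the sampled candidates and
 * return its index, or -1 if no live candidate was sampled. */
int evict_one(toy_obj *keys, const int *sample, int n, unsigned int now) {
    int best = -1;
    unsigned int best_idle = 0;
    for (int i = 0; i < n; i++) {
        int j = sample[i];
        if (!keys[j].alive) continue;
        unsigned int idle = now - keys[j].lru;   /* staler stamp = larger idle */
        if (best < 0 || idle > best_idle) { best = j; best_idle = idle; }
    }
    if (best >= 0) keys[best].alive = 0;         /* "delete" the chosen key */
    return best;
}
```

The real flow differs mainly in that candidates accumulate in the eviction pool across rounds instead of being considered only within one sample, which improves how closely the result tracks true LRU.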

Summary #

Alright, that’s all for today’s class. Let’s summarize.

In today’s class, I introduced to you how Redis implements the LRU algorithm for cache data eviction. According to the basic principles of the LRU algorithm, if we strictly implement it, the system would need additional memory space to store the LRU linked list, and the system’s runtime would also be affected by the overhead of LRU linked list operations.

For Redis, both memory resources and performance are important, so it implements an approximate LRU algorithm. To achieve this, Redis first sets a global LRU clock and obtains the global LRU clock value as the access timestamp when a key-value pair is created, as well as updating the access timestamp with the global LRU clock value on every access.

Then, whenever Redis processes a command, it calls the freeMemoryIfNeeded function to determine if memory needs to be released. If the used memory exceeds the maxmemory setting, the approximate LRU algorithm randomly selects some key-value pairs to form a set of candidates for eviction, and based on their access timestamps, selects the oldest data to evict.

In fact, by studying this lesson, you can appreciate the gap between an algorithm's basic principles and its actual implementation. System development always involves trade-offs: you must weigh the system's resource and performance requirements against the overhead that a strict algorithm implementation would bring. This is a principle worth upholding when developing computer systems.

One Question for Each Lesson #

Now you know that the Redis source code provides the function getLRUClock() to calculate the global LRU clock value, and the LRU clock value for key-value pairs is obtained through the function LRU_CLOCK(). The following code shows the execution logic of the LRU_CLOCK() function. This function includes two branches: one branch directly retrieves the global clock value from the global variable server.lruclock, and the other branch calls the getLRUClock() function to obtain the global clock value.

So, do you know why the LRU clock value for key-value pairs is not obtained directly by calling the getLRUClock() function?

unsigned int LRU_CLOCK(void) {
    unsigned int lruclock;
    if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
        atomicGet(server.lruclock, lruclock);
    } else {
        lruclock = getLRUClock();
    }
    return lruclock;
}

Feel free to share your answer and thought process in the comments section. If you found it helpful, you can also share today’s content with more friends.