12 Is Redis Truly Single-Threaded #

In today’s class, let’s discuss the execution model of Redis.

The execution model refers to the number of processes, subprocesses, and threads used by Redis at runtime, as well as the tasks they are responsible for.

When you use Redis in practice, you may often hear different statements such as “Redis is single-threaded,” “Redis’s main IO thread,” “Redis includes multiple threads,” and so on. I have also heard many students express confusion and doubts: Is Redis really a single-threaded program?

In fact, thoroughly understanding this question can help guide us in maintaining the high performance and low latency characteristics of Redis. If Redis is indeed a single-threaded program, then we need to avoid all operations that may cause thread blocking. However, if Redis is not just single-threaded, and there are other threads working, then we need to understand the tasks that each thread is responsible for. We need to know how many threads are responsible for request parsing and data reading and writing, and which operations are handled by background threads without affecting request parsing and data reading and writing.

Therefore, in today’s class, I will start with the processes running after the Redis server is started and guide you through the process of learning how subprocesses and threads are created in the Redis source code. At the same time, you will grasp the situation of processes, subprocesses, and threads involved during the runtime of the Redis server.

Now, let’s first look at the process running when the Redis server starts.

From executing shell commands to creating Redis processes #

When we start a Redis instance, we can execute the redis-server executable in a shell command line environment, as shown below:

./redis-server /etc/redis/redis.conf

After running this command in the shell, it actually calls the fork system call function to create a new process. Since the shell itself is a process, the new process created by fork is called the child process of the shell process, and the shell process is called the parent process. I will give you a specific introduction to the usage of the fork function in a moment.

Next, the child process calls the execve system call function to replace its own program image with the Redis executable. The entry function of the Redis executable is the main function, so from that point on the child process executes the main function of the Redis server.

The following code shows the prototype of the execve system call function. Among them, filename is the name of the program to be run, argv[] and envp[] are the parameters and environment variables of the program to be run, respectively:

int execve(const char *filename, char *const argv[], char *const envp[]);
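
To make this fork-plus-execve sequence concrete, here is a minimal sketch of what a shell conceptually does when you run the command above. This is an illustration rather than real shell source code: the binary path /usr/local/bin/redis-server is a placeholder, and a real shell also handles job control, signals, and error reporting.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;   /* the current environment, passed through to the child */

int main(void) {
    pid_t pid = fork();                     /* the shell duplicates itself */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: it inherits the shell's open file descriptors (0, 1, 2),
         * then replaces its program image with the redis-server executable. */
        char *child_argv[] = {"redis-server", "/etc/redis/redis.conf", NULL};
        execve("/usr/local/bin/redis-server", child_argv, environ);
        perror("execve");                   /* reached only if execve fails */
        exit(1);
    }
    /* Parent (the shell): wait for the foreground child to finish */
    waitpid(pid, NULL, 0);
    return 0;
}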

The following diagram shows the process from executing the shell command to creating the Redis process:

When we run the Redis server with the shell command we just introduced, we will see that the log output after the Redis server starts will be printed on the terminal screen, as shown below:

37807:M 19 Aug 2021 07:29:36.372 # Server initialized
37807:M 19 Aug 2021 07:29:36.372 * DB loaded from disk: 0.000 seconds
37807:M 19 Aug 2021 07:29:36.372 * Ready to accept connections

This is because the child process created by the shell process calling the fork function inherits some properties from the parent process, such as the file descriptors opened by the parent process. For the shell process, the file descriptors it opened include 0 and 1, which represent standard input and standard output, respectively. The execve function only replaces the execution content of the child process with the Redis executable, and the standard input and standard output inherited by the child process from the shell parent process remain unchanged.

Therefore, the log information printed by Redis through the serverLog function will be output to the terminal screen by default, which is the standard output of the shell process.

Once the Redis process starts running, it begins executing from the main function. We already walked through the main execution flow of the main function in Lesson 8, where we saw that it calls different functions to carry out different pieces of work. For example, the main function calls the initServerConfig function to initialize the runtime parameters of the Redis server, and calls the loadServerConfig function to parse the configuration file. When the main function calls these functions, they are still executed by the original process, so at this point Redis is still running as a single process.

However, after the main function completes the parameter parsing, it will set the value of the variable background based on two configuration parameters: daemonize and supervised. Their meanings are as follows:

  • The daemonize parameter indicates whether to set Redis to run as a daemon process.
  • The supervised parameter indicates whether Redis is being managed by a daemon supervisor such as upstart or systemd.

So, let's take a closer look at daemon processes. A daemon is a process that runs in the background of the system, detached from the shell terminal, and no longer requires user input from the shell. Daemons are generally used to perform periodic tasks or to wait for certain events and then handle them. The Redis server itself, once started, waits for client requests and then processes them, so for server programs like Redis we usually run them as daemons.

Okay, if Redis is set to run as a daemon process, how is the daemon process created? This is related to the daemonize function called by the main function. The daemonize function is used to transform the Redis process into a daemon process to run.

The following code shows the logic in the main function to determine whether to execute the daemonize function based on the value of the background variable:

int main(int argc, char **argv) {
    ...
    // Set background to 1 only when daemonize is enabled and no supervisor (upstart/systemd) manages Redis; otherwise 0
    int background = server.daemonize && !server.supervised;
    // If the background value is 1, call the daemonize function
    if (background) daemonize();
    ...
}

That is to say, if the value of background is 1, it means that Redis is set to run as a daemon process, so the main function will call the daemonize function.

So, next, let’s learn how the daemonize function transforms Redis into a daemon process.

Learning about creating daemon processes from the execution of the daemonize function #

Let’s first take a look at part of the daemonize function’s execution, as shown below. We can see that the daemonize function calls the fork function and has different branches of code based on the return value of fork.

void daemonize(void) {
    ...
    if (fork() != 0) exit(0); // The parent process exits, whether fork succeeded or failed
    setsid(); // Create a new session
    ...
}

From the previous introduction, we already know that when we call the fork function in a program, it creates a child process. The process that the program originally corresponds to is referred to as the parent process of this child process. So, what is the relationship between the different branches after the fork function executes and the parent and child processes? This is related to how the fork function is used.

In reality, the use of the fork function is quite interesting. We can write corresponding branch code based on the different return values of the fork function. These branch codes correspond to the logic that the parent and child processes need to execute.

To help you understand, let me give you an example. I have written a sample code in which the main function calls the fork function and further branches based on whether the return value of fork is less than 0, equal to 0, or greater than 0. Note that the different return values of the fork function actually represent different meanings, specifically:

  • When the return value is less than 0, it indicates that the fork function execution failed.
  • When the return value is equal to 0, the corresponding code branch will run in the child process.
  • When the return value is greater than 0, the corresponding code branch will still run in the parent process.

Here is the sample code:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("hello main\n");
    int rv = fork(); // The return value of the fork function
    // Return value less than 0: fork failed
    if (rv < 0) {
        fprintf(stderr, "fork failed\n");
    }
    // Return value equal to 0: this branch runs in the child process
    else if (rv == 0) {
        printf("I am child process %d\n", getpid());
    }
    // Return value greater than 0: this branch runs in the parent process
    else {
        printf("I am parent process (%d), %d\n", rv, getpid());
    }
    return 0;
}

In this code, I wrote three branches based on the return value of the fork function. The branch for a return value of 0 is executed by the child process: it prints “I am child process” followed by its own process ID. The branch for a return value greater than 0 is executed by the parent process: it prints “I am parent process” together with the process ID of the child it just created (the fork return value) and its own process ID.

If you compile and execute this code, you will see output similar to the example below: the parent process prints its own process ID, 62794, while the child process prints its process ID, 62795. This shows that the two branches in the example are indeed executed by the parent and child processes, respectively, and that after fork returns we can use the different branches to make the parent and child execute different logic.
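
Here is what one run might look like. The PIDs are illustrative and will differ on your machine, and because the parent and child run concurrently, the order of the last two lines can vary:

hello main
I am parent process (62795), 62794
I am child process 62795

Now that we understand how the fork function creates a child process, let's go back to the daemonize function introduced earlier.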

Now we know that after calling the fork function, the daemonize function can set different code branches based on the return value of the fork function, corresponding to the execution logic of the parent and child processes. In fact, the daemonize function does set two code branches.

  • Branch One

This branch corresponds to the case where the return value of the fork function is not 0, which covers both the parent process after a successful fork and the case where fork fails (fork returns -1, so the failure path also runs in the original process). In either case, the process calls the exit(0) function and terminates. In other words, if fork succeeds, the parent simply exits and hands over to the child; if fork fails, no child was created and the process exits as well. You can see this branch in the code below.

void daemonize(void) {
    ...
    if (fork() != 0) exit(0); // Parent process exits if fork succeeds or fails
    ...
}
  • Branch Two

This branch corresponds to the condition where the return value of the fork function is 0, indicating the execution logic of the child process. The child process first calls the setsid function to create a new session.

Then, the child process uses the open function to open the /dev/null device and, with dup2, redirects its standard input, standard output, and standard error to /dev/null. Because a daemon runs in the background, its input and output must be detached from the shell terminal. These steps redirect the child process's input and output away from the original shell terminal and to the /dev/null device, so that it no longer depends on the terminal, which is exactly what a daemon process requires.

I have placed the code of the daemonize function here for you to see.

void daemonize(void) {
    ...
    setsid(); // Create a new session for the child process
    
    // Redirect the standard input, standard output, and standard error output of the child process to /dev/null
    if ((fd = open("/dev/null", O_RDWR, 0)) != -1) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO) close(fd);
    }
}

Okay, up to this point, we have understood that the main function of Redis determines whether to run Redis as a daemon process based on the configuration parameters “daemonize” and “supervised”.

So, once Redis is to run as a daemon process, the main function will call the daemonize function. The daemonize function will further call the fork function to create a child process, and based on the return value, execute different code branches for the parent process and the child process. The parent process will exit, while the child process will replace the original parent process and continue to execute the code in the main function.

The following diagram shows the execution logic of the two branches after the daemonize function calls the fork function, you can review it again.

In fact, regardless of whether Redis is running as a daemon process or not, it is still a process running. For a process, if it does not create new threads after it starts, then the default work task of this process is executed by a single thread, which I generally refer to as the main thread.

For Redis, its main work, including receiving client requests, parsing requests, and performing data read and write operations, is not executed by creating new threads. Therefore, Redis does indeed mainly work with a single thread, which is why we often say that Redis is a single-threaded program. Because the main work of Redis is IO read and write operations, I also refer to this single thread as the main IO thread.

However, in fact, starting from Redis version 3.0, in addition to the main IO thread, Redis does start some background threads to handle some tasks in order to avoid these tasks affecting the main IO thread. So, where are these background threads started and how do they execute? This is related to the file bio.c in Redis. Next, let’s learn about Redis background threads from this file.

Learning Redis Background Threads from the bio.c file #

Let’s first take a look at the InitServerLast function called by the main function during the initialization process. The purpose of the InitServerLast function is to further call the bioInit function to create background threads, allowing Redis to delegate some tasks to the background threads. The process is shown below.

void InitServerLast() {
    bioInit();
    ...
}

The bioInit function is implemented in the bio.c file. Its main purpose is to create multiple background threads by calling the pthread_create function. However, before understanding the bioInit function in detail, let’s take a look at the main arrays defined in the bio.c file. These arrays also need to be initialized in the bioInit function.

The bio.c file defines the pthread_t type array bio_threads to store the thread descriptors of the created threads. In addition, the bio.c file also creates an array bio_mutex to store mutex locks, and two arrays bio_newjob_cond and bio_step_cond to store condition variables. The following code shows the logic of creating these arrays. Take a look.

// Array to store thread descriptors
static pthread_t bio_threads[BIO_NUM_OPS];
// Array to store mutex locks
static pthread_mutex_t bio_mutex[BIO_NUM_OPS];
// Two arrays to store condition variables
static pthread_cond_t bio_newjob_cond[BIO_NUM_OPS];
static pthread_cond_t bio_step_cond[BIO_NUM_OPS];

You can see that the size of these arrays is defined by the BIO_NUM_OPS macro, which is defined in the bio.h file with a default value of 3.

At the same time, in the bio.h file, you can also see three other macro definitions, namely BIO_CLOSE_FILE, BIO_AOF_FSYNC, and BIO_LAZY_FREE. Their code is shown below.

#define BIO_CLOSE_FILE    0 /* Deferred close(2) syscall. */
#define BIO_AOF_FSYNC     1 /* Deferred AOF fsync. */
#define BIO_LAZY_FREE     2 /* Deferred objects freeing. */
#define BIO_NUM_OPS       3

Among them, BIO_NUM_OPS is the number of background task types (three). BIO_CLOSE_FILE, BIO_AOF_FSYNC, and BIO_LAZY_FREE are the operation codes of these three background task types and are used to tell the different tasks apart.

  • BIO_CLOSE_FILE: Deferred close(2) syscall task.
  • BIO_AOF_FSYNC: Deferred AOF fsync task.
  • BIO_LAZY_FREE: Deferred objects freeing task.

In fact, the thread array, mutex lock array, and condition variable array created by the bio.c file all contain three elements, which correspond to these three tasks.

bioInit function: Array Initialization #

Next, let’s understand the bioInit function, which is the initialization and thread creation function in the bio.c file. As I mentioned earlier, it is called through the InitServerLast function after the main function completes the server initialization. In other words, after completing the server initialization, Redis will create threads to execute background tasks.

So from here, it can be seen that Redis has more than just a single thread (the main IO thread) running at runtime. There will also be background threads running. Now you can give an accurate answer to the question of whether Redis is single-threaded.

The bioInit function first initializes the mutex lock array and condition variable arrays. Then, the function calls the listCreate function to create a list for each element of the bio_jobs array and assigns 0 to each element of the bio_pending array. The code for this part is shown below.

for (j = 0; j < BIO_NUM_OPS; j++) {
    pthread_mutex_init(&bio_mutex[j], NULL);
    pthread_cond_init(&bio_newjob_cond[j], NULL);
    pthread_cond_init(&bio_step_cond[j], NULL);
    bio_jobs[j] = listCreate();
    bio_pending[j] = 0;
}

So, to understand the purpose of assigning values to the elements of the bio_jobs array and bio_pending array, we need to first understand the meaning of these two arrays:

  • Each element of the bio_jobs array is a list whose nodes hold bio_job structures, each representing one background task. The bio_job structure records the task's creation time as well as the task's parameters. bioInit creates one list for each element of this array, which effectively gives each background thread its own task queue.
  • Each element of the bio_pending array is of type unsigned long long and counts how many tasks of the corresponding type are still pending. Initializing every element to 0 means that, at startup, no task of any type is waiting to be processed.

The following code shows the bio_job structure, as well as the definition of the bio_jobs and bio_pending arrays. Take a look:

struct bio_job {
    time_t time; // Task creation time
    void *arg1, *arg2, *arg3; // Task parameters
};

// Task queues, one per type of background task
static list *bio_jobs[BIO_NUM_OPS];
// Number of pending tasks for each type of background task
static unsigned long long bio_pending[BIO_NUM_OPS];

Okay, so at this point you understand that when the bioInit function runs, it initializes each mutex and condition variable with default attributes (the NULL second argument), creates a task list for each background thread (the elements of the bio_jobs array), and sets the number of pending tasks of each type to 0 (the elements of the bio_pending array).

bioInit Function: Setting Thread Attributes and Creating Threads #

After the initialization is done, the bioInit function sets the thread attributes using a pthread_attr_t variable and creates the threads using the pthread_create function I mentioned earlier.

However, to better understand the process of setting thread attributes and creating threads in the bioInit function, we need to have some understanding of the pthread_create function itself. Here is its prototype:

int pthread_create(pthread_t *tidp, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

As you can see, the pthread_create function takes four parameters; a minimal usage sketch follows the list below:

  • tidp: A pointer to a pthread_t variable; on success, pthread_create stores the identifier of the newly created thread in it.
  • attr: A pointer to the pthread_attr_t structure that holds the thread attributes.
  • start_routine: The starting address of the function that the thread will run, which is also a pointer to the function.
  • arg: The argument passed to the running function.
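
To see these four parameters in action outside of Redis, here is a minimal, self-contained sketch (my own example, not Redis code) that creates a single thread running a worker function and then waits for it to finish. Compile it with the -pthread flag.

#include <stdio.h>
#include <pthread.h>

/* start_routine: receives the arg pointer that was passed to pthread_create */
static void *worker(void *arg) {
    long id = (long)arg;
    printf("worker thread running, arg = %ld\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;  /* pthread_create stores the new thread's ID here */
    /* attr is NULL, so the thread is created with default attributes */
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);  /* wait for the worker thread to exit */
    return 0;
}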

Now that we understand the pthread_create function, let’s take a look at the specific operations in the bioInit function.

First, the bioInit function calls pthread_attr_init to initialize the thread attribute variable attr. Then it calls pthread_attr_getstacksize to retrieve the current stack size attribute and keeps doubling that value until it is at least REDIS_THREAD_STACK_SIZE (4 MB by default). Finally, the bioInit function calls pthread_attr_setstacksize to set the stack size attribute to the resulting value.

The following code shows the logic for retrieving, calculating, and setting the stack size attribute. Take a look:

pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &stacksize);
if (!stacksize) stacksize = 1; // Handle for Solaris system
while (stacksize < REDIS_THREAD_STACK_SIZE) stacksize *= 2;
pthread_attr_setstacksize(&attr, stacksize);

I’ve also created a diagram showing the process of setting thread attributes. Take a look:

After setting the thread attributes, the bioInit function proceeds with a for loop to create a thread for each type of background task. The loop runs for 3 iterations, as determined by the BIO_NUM_OPS macro. Accordingly, the bioInit function calls pthread_create 3 times to create 3 threads. The bioInit function makes these 3 threads execute the function bioProcessBackgroundJobs.

However, it’s worth noting that during this thread creation process, the function is passed arguments of 0, 1, and 2. Here’s how these threads are created:

for (j = 0; j < BIO_NUM_OPS; j++) {
    void *arg = (void *)(unsigned long)j;
    if (pthread_create(&thread, &attr, bioProcessBackgroundJobs, arg) != 0) {
        // Error message
    }
    bio_threads[j] = thread;
}

After looking at this code, you might have a small question: Why are the three threads created running the bioProcessBackgroundJobs function with arguments 0, 1, and 2?

This is related to the implementation of the bioProcessBackgroundJobs function. Let’s take a closer look.

bioProcessBackgroundJobs Function: Processing Background Jobs #

First, the bioProcessBackgroundJobs function converts the received argument arg to an unsigned long type and assigns it to the type variable, as shown below:

void *bioProcessBackgroundJobs(void *arg) {
    ...
    unsigned long type = (unsigned long) arg;
    ...
}

The type variable represents the operation code of the background job. This is related to the three operation codes BIO_CLOSE_FILE, BIO_AOF_FSYNC, and BIO_LAZY_FREE that I mentioned earlier. Their values are 0, 1, and 2, respectively.

The main logic of the bioProcessBackgroundJobs function is a while(1) loop. In this loop, the function retrieves the corresponding task from the bio_jobs array based on the task type and calls the specific function to execute it.

As I mentioned earlier, each element in the bio_jobs array is a queue. Since the number of elements in the bio_jobs array is equal to the number of background job types (i.e., BIO_NUM_OPS), each element in the bio_jobs array actually corresponds to a task queue for a specific type of background job.

Once we understand this, the while loop in the bioProcessBackgroundJobs function is easy to follow. Since the arguments passed to the three bioProcessBackgroundJobs threads are 0, 1, and 2, corresponding to the three task types, each thread keeps fetching tasks from the queue of its own type inside this loop.

At the same time, the bioProcessBackgroundJobs function will call the appropriate function based on the task operation type. Specifically:

  • If the task type is BIO_CLOSE_FILE, the close function is called.
  • If the task type is BIO_AOF_FSYNC, the redis_fsync function is called.
  • If the task type is BIO_LAZY_FREE, the lazyfreeFreeObjectFromBioThread, lazyfreeFreeDatabaseFromBioThread, or lazyfreeFreeSlotsMapFromBioThread function is called depending on the number of arguments.

Finally, when a task is completed, the bioProcessBackgroundJobs function removes the corresponding data structure associated with that task from the task queue. I have included this code here for you to see.

while(1) {
    listNode *ln;

    ...
    // Get the first task from the task queue of type 'type'
    ln = listFirst(bio_jobs[type]);
    job = ln->value;

    ...
    // Determine which type of background job is being processed
    if (type == BIO_CLOSE_FILE) {
        close((long)job->arg1);  // Close-file task: call the close function
    } else if (type == BIO_AOF_FSYNC) {
        redis_fsync((long)job->arg1); // AOF sync task: call the redis_fsync function
    } else if (type == BIO_LAZY_FREE) {
        // Lazy free task: call different lazy free functions depending on which arguments are set
        if (job->arg1)
            lazyfreeFreeObjectFromBioThread(job->arg1);
        else if (job->arg2 && job->arg3)
            lazyfreeFreeDatabaseFromBioThread(job->arg2, job->arg3);
        else if (job->arg3)
            lazyfreeFreeSlotsMapFromBioThread(job->arg3);
    } else {
        serverPanic("Wrong job type in bioProcessBackgroundJobs().");
    }

    ...
    // After the task is completed, call listDelNode to delete it from the task queue
    listDelNode(bio_jobs[type], ln);
    // Decrease the number of pending tasks accordingly
    bio_pending[type]--;
    ...
}

Therefore, the bioInit function actually creates three threads that continuously check if there are tasks in the task queue. If there is a task, it calls the specific function to execute.

You can refer to the basic processing flow of the bioInit function and the bioProcessBackgroundJobs function shown in the figure below.

bioInit and bioProcessBackgroundJobs

However, you may still wonder: if the bioProcessBackgroundJobs function is responsible for executing tasks, which function is responsible for generating tasks?

This leads to my next introduction to the background task creation function bioCreateBackgroundJob.

bioCreateBackgroundJob Function: Creating Background Tasks #

The prototype of the bioCreateBackgroundJob function is as follows. It receives four parameters, where the type parameter represents the type of the background task, and the remaining three parameters correspond to the parameters of the background task function, as shown below:

void bioCreateBackgroundJob(int type, void *arg1, void *arg2, void *arg3)

When the bioCreateBackgroundJob function is executed, it first creates a bio_job, which is the data structure corresponding to the background task. Then, the parameters in the background task data structure are set to the arg1, arg2, and arg3 parameters passed in the bioCreateBackgroundJob function.

Finally, the bioCreateBackgroundJob function calls the listAddNodeTail function to add the created task to the corresponding bio_jobs queue, and increases the corresponding value in the bio_pending array by 1, indicating that there is a task waiting to be executed.

void bioCreateBackgroundJob(int type, void *arg1, void *arg2, void *arg3) {
    // Create a new task
    struct bio_job *job = zmalloc(sizeof(*job));
    // Set the parameters in the task data structure
    job->time = time(NULL);
    job->arg1 = arg1;
    job->arg2 = arg2;
    job->arg3 = arg3;
    pthread_mutex_lock(&bio_mutex[type]);
    listAddNodeTail(bio_jobs[type], job);  // Add the task to the task queue of its type in bio_jobs
    bio_pending[type]++; // Increase the pending-task count of this type by 1
    pthread_cond_signal(&bio_newjob_cond[type]);
    pthread_mutex_unlock(&bio_mutex[type]);
}

In this way, when the Redis process wants to start a background task, it simply calls the bioCreateBackgroundJob function and sets the type and parameters of the task. Then, the bioCreateBackgroundJob function will put the created task data structure into the queue corresponding to the background task. On the other hand, the threads created by the bioInit function in the Redis server will continuously poll the background task queues. Once a task is found to be executable, it will be taken out and executed.
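
For instance, to hand a file descriptor off to the close-file background thread, the caller only needs something of roughly this shape (an illustrative call rather than a verbatim excerpt, since the exact call sites vary across Redis versions):

// Defer closing fd to the BIO_CLOSE_FILE background thread,
// so the main IO thread does not block on close(2)
bioCreateBackgroundJob(BIO_CLOSE_FILE, (void *)(long)fd, NULL, NULL);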

In fact, this design pattern is a typical producer-consumer model. The bioCreateBackgroundJob function is the producer, responsible for adding the tasks to be executed to each task queue. The bioProcessBackgroundJobs function is the consumer, responsible for taking out the tasks from each task queue and executing them. The background threads created by Redis will call the bioProcessBackgroundJobs function to continuously check the task queues.
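
If you want to reuse this pattern outside of Redis, here is a minimal, self-contained producer-consumer sketch in the same spirit as bio.c. It is my own simplification, with a single job queue guarded by a mutex and a condition variable that wakes the consumer; bio.c uses one queue, mutex, and condition variable per task type.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

/* A single job queue, simplified from bio.c's per-type arrays */
struct job { int payload; struct job *next; };

static struct job *queue_head = NULL, *queue_tail = NULL;
static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond  = PTHREAD_COND_INITIALIZER;

/* Producer: append a job and wake the consumer (the bioCreateBackgroundJob role) */
static void submit_job(int payload) {
    struct job *j = malloc(sizeof(*j));
    j->payload = payload;
    j->next = NULL;
    pthread_mutex_lock(&queue_mutex);
    if (queue_tail) queue_tail->next = j; else queue_head = j;
    queue_tail = j;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_mutex);
}

/* Consumer: sleep until a job arrives, then process it (the bioProcessBackgroundJobs role) */
static void *consumer(void *arg) {
    (void)arg;
    while (1) {
        pthread_mutex_lock(&queue_mutex);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_mutex); /* releases the lock while waiting */
        struct job *j = queue_head;
        queue_head = j->next;
        if (queue_head == NULL) queue_tail = NULL;
        pthread_mutex_unlock(&queue_mutex);

        printf("processing job %d\n", j->payload); /* the actual background work goes here */
        free(j);
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, consumer, NULL);
    for (int i = 0; i < 3; i++) submit_job(i); /* produce a few jobs */
    sleep(1); /* give the consumer time to drain the queue before exiting */
    return 0;
}

The key detail, which the earlier excerpts hide behind the "...", is that an idle consumer sleeps inside pthread_cond_wait and uses no CPU until the producer signals that a new job has been queued.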

The following figure shows the producer-consumer model between bioCreateBackgroundJob and bioProcessBackgroundJobs. You can take a look.

bioCreateBackgroundJob and bioProcessBackgroundJobs

So far, we have learned about the mechanism of creating and running background threads in Redis. In summary, the three key points are as follows:

  • Redis initializes and creates background threads through the bioInit function.
  • The background threads run the bioProcessBackgroundJobs function, which polls the task queues and calls the corresponding functions to process the tasks based on their types.
  • The tasks to be processed by the background threads are created by the bioCreateBackgroundJob function, and these tasks are put into the task queues, waiting to be processed by the bioProcessBackgroundJobs function.

Summary #

In today's class, I introduced you to the execution model of Redis and analyzed its source code to help you understand how the Redis process is created, how it turns itself into a daemon by forking a child process, and which background threads it creates along with the tasks they are responsible for. This also answers a frequently asked interview question: Is Redis a single-threaded program?

In fact, after the Redis server is started, its main tasks, such as receiving client requests, parsing requests, and performing data read and write operations, are executed by a single thread, which is why we often say that Redis is a single-threaded program.

However, after completing this class, you should also know that Redis starts three additional threads to perform operations such as file closing, AOF synchronization writing, and lazy deletion. From this perspective, Redis cannot be considered a single-threaded program, as it also has multiple threads. Moreover, in the next class, I will introduce the implementation of multiple IO threads in Redis 6.0. From the perspective of multiple IO threads, Redis cannot be called a single-threaded program either.

In addition, after completing this class, you should pay special attention to the key concepts of using the fork function and the producer-consumer model.

Firstly, the use of the fork function. The fork function can create a child process while a process is running. When Redis is configured to run as a daemon process, the main function of Redis calls the fork function to create a child process, which then runs as a daemon process, while the initially started parent process exits. Since the child process inherits the code from the parent process, the execution logic in the main function is passed on to the child process.

Secondly, the producer-consumer model. Redis creates background threads in the bio.c and bio.h files and implements the execution of background tasks. You should pay close attention to the producer-consumer execution model used here, as it is the core design concept behind the execution of background tasks in bio.c. Moreover, when you need to implement asynchronous task execution, the producer-consumer model is a good solution. You can learn about the implementation approach of this model from the Redis source code.

One Question per Lesson #

Redis background tasks use the bio_job structure to describe a task. This structure uses three pointer variables to represent the task's parameters, as shown below. If the task we create requires more than three parameters, what approaches could you use to pass them?

struct bio_job {
    time_t time;
    void *arg1, *arg2, *arg3;  // Parameters passed to the task
};

Please share your answer and thought process in the comments. If you find it helpful, feel free to share today’s content with more friends.