13 Did Redis 6.0’s Multiple IO Threads Improve Performance #
From the previous lesson, we learned that after the Redis server starts, it executes client request parsing and processing in a single-threaded manner. However, Redis server also starts three background threads through the bioInit function to handle background tasks. In other words, Redis no longer lets the main thread perform time-consuming operations such as synchronous writes and deletions. Instead, it delegates these tasks to background threads to be completed asynchronously, thus avoiding blocking the main thread.
In fact, in the Redis 6.0 version released in May 2020, Redis further uses multiple threads to handle IO tasks in its execution model. The purpose of this design is to fully utilize the multi-core capabilities of the current server, run multiple threads using multiple cores, and allow these threads to help accelerate data reading, command parsing, and data write-back, improving the overall performance of Redis.
So, when are these multiple threads started and how are they used to process IO requests?
In today’s lesson, I will introduce to you the multi-IO thread mechanism implemented in Redis 6.0. By learning this part, you can fully understand how Redis 6.0 improves the efficiency of IO request processing through multi-threading. This way, you can evaluate whether you need to use Redis 6.0 based on your actual business needs.
Alright, let’s first take a look at the initialization of the multiple IO threads. Note that because we read the code of Redis version 5.0.8 in the previous lesson, before starting today’s lesson, you need to download the source code of Redis 6.0.15 so that you can see the code related to the multi-IO thread mechanism.
Initialization of Multiple IO Threads #
As I introduced in the previous lecture, the three background threads in Redis 5.0 are started from the InitServerLast function at the end of the server’s initialization process. In Redis 6.0, after calling bioInit in InitServerLast, the server additionally invokes the initThreadedIO function, which initializes the multiple IO threads. Its invocation is as follows:
void InitServerLast() {
    bioInit();
    initThreadedIO(); // Call initThreadedIO to initialize the IO threads
    set_jemalloc_bg_thread(server.jemalloc_bg_thread);
    server.initial_memory_usage = zmalloc_used_memory();
}
Now, let’s take a closer look at the execution flow of the initThreadedIO function, which is implemented in the networking.c file.

First, initThreadedIO sets the activation flag of the IO threads. This flag is stored in the io_threads_active member of the redisServer structure, accessed through the server global variable, and it is initialized to 0 to indicate that the IO threads are not yet active. The code for this part is as follows:
void initThreadedIO(void) {
    server.io_threads_active = 0;
    ...
}
Here, it’s worth noting that server is a structure variable used to store various global information while the Redis server runs. In Lecture 8, when I introduced the initialization process of the Redis server, I mentioned that all the configuration parameters are stored in this server global variable. So whenever you encounter the server variable in a function while reading the Redis source code, it refers to this global variable.
Next, the initThreadedIO function checks the configured number of IO threads, which is stored in the io_threads_num member of the server global variable. There are three cases to consider.
The first case is when the number of IO threads is 1, meaning there is only the main IO thread. In this case, the initThreadedIO function simply returns, and the IO behavior remains the same as in versions before Redis 6.0.
if (server.io_threads_num == 1) return;
The second case is when the number of IO threads exceeds the defined maximum IO_THREADS_MAX_NUM (default value 128). In this case, the initThreadedIO function logs an error and exits the entire program.
if (server.io_threads_num > IO_THREADS_MAX_NUM) {
    // Error log recording
    exit(1); // Exit the program
}
The third case is when the number of IO threads is greater than 1 and no greater than IO_THREADS_MAX_NUM. In this case, the initThreadedIO function enters a loop whose iteration count equals the configured number of IO threads.
In this loop, the initThreadedIO function initializes four arrays:

- io_threads_list array: stores the clients to be handled by each IO thread; each element of the array is initialized as a list.
- io_threads_pending array: stores the number of clients waiting to be handled by each IO thread.
- io_threads_mutex array: stores the mutex lock of each thread.
- io_threads array: stores the descriptor of each IO thread.
These four arrays are defined in the networking.c file as follows:
pthread_t io_threads[IO_THREADS_MAX_NUM]; // Array to store thread descriptors
pthread_mutex_t io_threads_mutex[IO_THREADS_MAX_NUM]; // Array to store thread mutex locks
_Atomic unsigned long io_threads_pending[IO_THREADS_MAX_NUM]; // Array to store the number of pending clients for each thread
list *io_threads_list[IO_THREADS_MAX_NUM]; // Array to store clients for each thread
In addition to initializing these arrays, the initThreadedIO function also calls pthread_create to create the corresponding number of threads. As I mentioned in the previous lecture, the pthread_create function takes as parameters (tidp, attr, start_routine, arg), including the function the created thread will run and that function’s argument.

For the initThreadedIO function, the function the created threads run is IOThreadMain, and the argument is the current thread’s number. Note, however, that this number starts from 1; the thread with number 0 is the main IO thread running the Redis server’s main process.
The following code shows the initialization of the arrays and the creation of the IO threads in the initThreadedIO function:
for (int i = 0; i < server.io_threads_num; i++) {
    io_threads_list[i] = listCreate();
    if (i == 0) continue; // The thread with number 0 is the main IO thread
    pthread_t tid;
    pthread_mutex_init(&io_threads_mutex[i], NULL); // Initialize the io_threads_mutex array
    io_threads_pending[i] = 0; // Initialize the io_threads_pending array
    pthread_mutex_lock(&io_threads_mutex[i]);
    // Call pthread_create to create the IO thread, with IOThreadMain as its run function
    if (pthread_create(&tid, NULL, IOThreadMain, (void*)(long)i) != 0) {
        // Error handling
    }
    io_threads[i] = tid; // Store the thread identifier in the io_threads array
}
Now, let’s look at the IOThreadMain function, which each IO thread runs after it starts. Understanding this function will show us the actual work the IO threads perform.
The running function of the IO thread - IOThreadMain #
The IOThreadMain function is also defined in the networking.c file. Its main execution logic is a while(1) loop. In this loop, the IOThreadMain function reads the list corresponding to each IO thread from the io_threads_list array.
As I mentioned before, the io_threads_list array uses a list to record the clients that each IO thread needs to process. Therefore, the IOThreadMain function will further retrieve the clients to be processed from the list corresponding to each IO thread, and then determine the operation flag that the thread needs to execute. This operation flag is represented by the io_threads_op variable, which can have two values.
- The value of io_threads_op is the macro definition IO_THREADS_OP_WRITE: This indicates that the IO thread needs to perform a write operation, and the thread will call the writeToClient function to write data back to the client.
- The value of io_threads_op is the macro definition IO_THREADS_OP_READ: This indicates that the IO thread needs to perform a read operation, and the thread will call the readQueryFromClient function to read data from the client.
You can see the code logic for this part below.
void *IOThreadMain(void *myid) {
    ...
    while(1) {
        listIter li;
        listNode *ln;
        // Get the list of clients that this IO thread needs to process
        listRewind(io_threads_list[id],&li);
        while((ln = listNext(&li))) {
            client *c = listNodeValue(ln); // Get one client from the client list
            if (io_threads_op == IO_THREADS_OP_WRITE) {
                writeToClient(c,0); // For a write operation, call writeToClient to write data back to the client
            } else if (io_threads_op == IO_THREADS_OP_READ) {
                readQueryFromClient(c->conn); // For a read operation, call readQueryFromClient to read data from the client
            } else {
                serverPanic("io_threads_op value is unknown");
            }
        }
        listEmpty(io_threads_list[id]); // After processing all clients, empty this thread's client list
        io_threads_pending[id] = 0; // Set the number of pending tasks for this thread to 0
    }
}
I have also created the following diagram to show the basic flow of the IOThreadMain function.
So far, you should understand that when each IO thread is running, it constantly checks if there are clients waiting for it to process. If there are, it reads or writes data to/from the client based on the operation type. As you can see, these operations are all IO operations that Redis needs to perform with the client, hence the reason why we call these threads IO threads.
Now, you may have a question: How are the clients that the IO thread needs to process added to the io_threads_list array?
This brings us to the global variable of the Redis server, server. The server variable has two List-type member variables: clients_pending_write and clients_pending_read, which respectively record the clients waiting to write data and the clients waiting to read data, as shown below:
struct redisServer {
    ...
    list *clients_pending_write; // Clients waiting to write data
    list *clients_pending_read;  // Clients waiting to read data
    ...
};
You should know that during the process of receiving client requests and returning data to clients, Redis may defer client read and write operations based on certain conditions, and save the clients waiting to be read or written to these two lists. Then, before entering the event loop each time, Redis adds the clients in these lists to the io_threads_list array and hands them over to the IO threads for processing.
So next, let’s first see how Redis defers client read and write operations and adds these clients to the clients_pending_write and clients_pending_read lists.
How to delay client read operations? #
After establishing a connection with a client, the Redis server starts listening for readable events on that client. The callback function that handles readable events is readQueryFromClient, which I introduced to you in Lecture 11.

Now, let’s look at the readQueryFromClient function in Redis 6.0. It first retrieves the client c from the connection structure conn, and then calls the postponeClientRead function to decide whether to delay reading data from the client. The execution logic is as follows:
void readQueryFromClient(connection *conn) {
    client *c = connGetPrivateData(conn); // Retrieve the client from the connection data structure
    ...
    if (postponeClientRead(c)) return; // Decide whether to delay reading data from the client
    ...
}
Now, let’s look at the execution logic of the postponeClientRead function, which decides whether to delay reading from the client based on four conditions.
Condition 1: the io_threads_active member of the server global variable is 1.

This indicates that the multi-IO threads are active. As I mentioned earlier, this variable is initialized to 0 in the initThreadedIO function, so the multi-IO threads are inactive by default (I will explain later when it is set to 1).
Condition 2: the io_threads_do_reads member of the server global variable is 1.

This indicates that the multi-IO threads may be used to handle delayed client read operations. The value of this variable is set through the io-threads-do-reads option in the Redis configuration file redis.conf. Its default value is “no”, meaning the multi-IO thread mechanism does not handle client reads by default. So, if you want the multi-IO threads to handle client read operations, you need to set io-threads-do-reads to “yes”.
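Concretely, enabling threaded reads takes two settings in redis.conf (the thread count of 4 here is only an example; tune it to your machine’s core count):

```
io-threads 4            # total IO threads, including the main thread
io-threads-do-reads yes # also use the IO threads for reads, not only writes
```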
Condition 3: the ProcessingEventsWhileBlocked variable is 0.

This indicates that the processEventsWhileBlocked function is not currently executing. ProcessingEventsWhileBlocked is a global variable that is set to 1 when processEventsWhileBlocked starts executing, and set back to 0 when it finishes.

The processEventsWhileBlocked function is implemented in the networking.c file. It is called while Redis is loading an RDB or AOF file, to handle events captured by the event-driven framework; this prevents Redis from blocking during the file load and failing to process events in time. Because of this condition, client read operations are not delayed while processEventsWhileBlocked is handling client readable events.
Condition 4: the client has none of the CLIENT_MASTER, CLIENT_SLAVE, or CLIENT_PENDING_READ flags.

The CLIENT_MASTER and CLIENT_SLAVE flags indicate that the client is used for master-slave replication, and such clients do not delay read operations. The CLIENT_PENDING_READ flag itself indicates that the client has already been marked for a postponed read, so for clients that already carry this flag, the postponeClientRead function does not delay the read again.
In summary, the postponeClientRead function delays the current client’s read operation only when all four conditions above are satisfied. Specifically, it sets the CLIENT_PENDING_READ flag on the client and calls listAddNodeHead to add the client to the clients_pending_read list, a member of the server global variable.
Here is the code for the postponeClientRead function:
int postponeClientRead(client *c) {
    // Check whether IO threads are active and all other conditions hold
    if (server.io_threads_active && server.io_threads_do_reads &&
        !ProcessingEventsWhileBlocked &&
        !(c->flags & (CLIENT_MASTER|CLIENT_SLAVE|CLIENT_PENDING_READ)))
    {
        c->flags |= CLIENT_PENDING_READ; // Set the CLIENT_PENDING_READ flag to delay the read operation
        listAddNodeHead(server.clients_pending_read,c); // Add the client to the clients_pending_read list
        return 1;
    } else {
        return 0;
    }
}
In conclusion, Redis checks whether to delay a client read operation in the read event callback readQueryFromClient, by calling the postponeClientRead function. Next, let’s look at how Redis delays client write operations.
How to postpone client write operations? #
When Redis has executed a client command and needs to return the result, it calls the addReply function to write the result to the client output buffer.

At the beginning of addReply, it calls the prepareClientToWrite function to decide whether to postpone the client write operation. The following code shows this call:
void addReply(client *c, robj *obj) {
    if (prepareClientToWrite(c) != C_OK) return;
    ...
}
Now let’s look at the prepareClientToWrite function. It performs a series of checks based on the client’s flags, among which it calls the clientHasPendingReplies function to check whether the client still has data waiting to be written back in its output buffer.

If there is no pending data, prepareClientToWrite calls the clientInstallWriteHandler function to further decide whether the write operation can be postponed. The following code shows this call chain:
int prepareClientToWrite(client *c) {
    ...
    // If the client has no pending data, call clientInstallWriteHandler
    if (!clientHasPendingReplies(c)) clientInstallWriteHandler(c);
    return C_OK;
}
Therefore, whether a client write operation can be postponed is ultimately decided by the clientInstallWriteHandler function, which checks two conditions:

- The client has not been flagged with CLIENT_PENDING_WRITE, meaning its write operation has not already been postponed.
- The client’s instance is not involved in master-slave replication, or the client is a slave that has completed the full-replication RDB transfer and is ready to receive requests.
Once both conditions are met, the clientInstallWriteHandler function sets the client’s CLIENT_PENDING_WRITE flag, indicating that the write operation is postponed, and adds the client to the clients_pending_write list, a member of the server global structure.
void clientInstallWriteHandler(client *c) {
    // If the client is not flagged CLIENT_PENDING_WRITE, and it is either not involved
    // in master-slave replication or is an online slave ready to receive requests
    if (!(c->flags & CLIENT_PENDING_WRITE) &&
        (c->replstate == REPL_STATE_NONE ||
         (c->replstate == SLAVE_STATE_ONLINE && !c->repl_put_online_on_ack)))
    {
        // Flag the client as pending write, i.e., CLIENT_PENDING_WRITE
        c->flags |= CLIENT_PENDING_WRITE;
        listAddNodeHead(server.clients_pending_write,c); // Add the client to the clients_pending_write list
    }
}
To help you better understand, I have created a diagram showing the function call relationship for postponing client write operations in Redis. You can refer to it.
However, once Redis stores the postponed clients in the clients_pending_read and clients_pending_write lists, how are these clients assigned to the multiple IO threads for execution? This is handled by the following two functions:

- handleClientsWithPendingReadsUsingThreads: assigns clients from the clients_pending_read list to IO threads for processing.
- handleClientsWithPendingWritesUsingThreads: assigns clients from the clients_pending_write list to IO threads for processing.
Next, let’s take a closer look at the operations of these two functions.
How to assign pending read clients to IO threads for execution? #
First, let’s look at the handleClientsWithPendingReadsUsingThreads function, which is called from the beforeSleep function.

In the Redis 6.0 code, the event-driven framework still calls the aeMain function to run the event loop, and aeMain calls the aeProcessEvents function to handle the various events. Inside aeProcessEvents, before the aeApiPoll function is called to actually capture IO events, the beforeSleep function is invoked.
The process is shown in the following figure, which you can refer to:
The main logic of the handleClientsWithPendingReadsUsingThreads function can be divided into four steps.

In the first step, the function checks whether the IO threads are active, using the io_threads_active member of the server global variable, and whether the user has configured Redis to handle pending read clients with IO threads, using the io_threads_do_reads member. Only when the IO threads are active and allowed to handle pending reads does the function continue; otherwise it returns immediately. The logic of this step is shown in the following code:
if (!server.io_threads_active || !server.io_threads_do_reads)
return 0;
In the second step, the handleClientsWithPendingReadsUsingThreads function gets the length of the clients_pending_read list, which is the number of pending read clients. It then retrieves the pending clients from the list one by one and takes each client’s position in the list modulo the number of IO threads.

The remainder determines which IO thread the client is allocated to. The function then calls listAddNodeTail to append the client to the corresponding element of the io_threads_list array. As I mentioned earlier, each element of io_threads_list is a list holding the clients to be processed by one IO thread.
To help you understand, let me give you an example.
Assume the number of IO threads is set to 3, and there are 5 pending read clients in the clients_pending_read list, at positions 0, 1, 2, 3, and 4. Taking positions 0 through 4 modulo the thread count 3 yields 0, 1, 2, 0, 1, which are the numbers of the IO threads that will process these clients: client 0 goes to thread 0, client 1 to thread 1, and so on. As you can see, this allocation scheme hands the pending clients to the IO threads in a round-robin manner.
I have drawn the following diagram to show the allocation results. You can take a look:
The following code demonstrates the execution logic of allocating clients to IO threads in a round-robin manner:
int processed = listLength(server.clients_pending_read);
listRewind(server.clients_pending_read, &li);
int item_id = 0;
while ((ln = listNext(&li))) {
    client *c = listNodeValue(ln);
    int target_id = item_id % server.io_threads_num;
    listAddNodeTail(io_threads_list[target_id], c);
    item_id++;
}
After handleClientsWithPendingReadsUsingThreads finishes allocating the clients, it sets the IO thread operation flag to the read operation, IO_THREADS_OP_READ. It then records the length of each list in the io_threads_list array, that is, the number of clients each thread will process, in the io_threads_pending array. This process is shown below:
io_threads_op = IO_THREADS_OP_READ;
for (int j = 1; j < server.io_threads_num; j++) {
    int count = listLength(io_threads_list[j]);
    io_threads_pending[j] = count;
}
In the third step, the handleClientsWithPendingReadsUsingThreads function takes the pending read clients one by one from list 0 of the io_threads_list array (io_threads_list[0]) and calls the readQueryFromClient function to process them.

The handleClientsWithPendingReadsUsingThreads function itself runs on the main IO thread, and element 0 of the io_threads_list array corresponds to the main IO thread. So here, the main IO thread processes its own pending read clients.
listRewind(io_threads_list[0], &li); // Get all clients in list 0
while ((ln = listNext(&li))) {
    client *c = listNodeValue(ln);
    readQueryFromClient(c->conn);
}
listEmpty(io_threads_list[0]); // After processing, empty list 0
Next, the handleClientsWithPendingReadsUsingThreads function enters a while(1) loop and waits for all IO threads to finish processing their pending read clients, as shown below:
while (1) {
    unsigned long pending = 0;
    for (int j = 1; j < server.io_threads_num; j++)
        pending += io_threads_pending[j];
    if (pending == 0) break;
}
In the fourth step, the handleClientsWithPendingReadsUsingThreads function traverses the clients_pending_read list again, retrieves each client, and checks whether the client’s flags contain CLIENT_PENDING_COMMAND. If the CLIENT_PENDING_COMMAND flag is present, the command in the client has already been parsed by an IO thread and can be executed, so the function calls processCommandAndResetClient to execute it. Finally, it calls the processInputBuffer function to parse and execute any remaining commands in the client’s input buffer.
The code logic for this part is shown below. Please take a look:
while (listLength(server.clients_pending_read)) {
    ln = listFirst(server.clients_pending_read);
    client *c = listNodeValue(ln);
    ...
    // If the command has already been parsed, execute it
    if (c->flags & CLIENT_PENDING_COMMAND) {
        c->flags &= ~CLIENT_PENDING_COMMAND;
        if (processCommandAndResetClient(c) == C_ERR) {
            continue;
        }
    }
    // Parse and execute all remaining commands
    processInputBuffer(c);
}
Alright, at this point you have learned how the pending read clients in the clients_pending_read list are assigned to IO threads through the four steps above. The following figure shows the main process, which you can review:
Next, let’s take a look at how pending write clients are assigned and processed.
How to assign pending write clients to IO threads for execution? #
Similar to the allocation and handling of pending read clients, the allocation and handling of pending write clients is done by the handleClientsWithPendingWritesUsingThreads function. This function is also called within the beforeSleep function.
The main process of handleClientsWithPendingWritesUsingThreads can also be divided into 4 steps, where the execution logic of steps 2, 3, and 4 is similar to the handleClientsWithPendingReadsUsingThreads function.
In step 2, handleClientsWithPendingWritesUsingThreads function assigns the pending write clients to the IO threads in a round-robin manner and adds them to the io_threads_list array.
Then, in step 3, handleClientsWithPendingWritesUsingThreads function allows the main IO thread to handle its pending write clients and executes a while(1) loop to wait for all IO threads to finish processing.
In step 4, handleClientsWithPendingWritesUsingThreads function checks again if there are any pending write clients in the clients_pending_write list and if these clients still have data remaining in the buffer. If there are, handleClientsWithPendingWritesUsingThreads function calls the connSetWriteHandler function to register a writable event, with the callback function for this event being the sendReplyToClient function.
When the event loop cycle is processed again, the writable event registered by the handleClientsWithPendingWritesUsingThreads function will be handled, and then the sendReplyToClient function will execute. It will directly call the writeToClient function to write the data from the client’s buffer back.
Here, what you need to pay attention to is that the connSetWriteHandler function ultimately maps to the connSocketSetWriteHandler function, which is implemented in the connection.c file. The connSocketSetWriteHandler function calls the aeCreateFileEvent function to create the AE_WRITABLE event, which is the registration of the writable event mentioned earlier (you can also review Lesson 11 about the use of the aeCreateFileEvent function).
However, unlike the handleClientsWithPendingReadsUsingThreads function, in step 1, the handleClientsWithPendingWritesUsingThreads function checks whether the number of IO threads is 1 or if the number of pending write clients is less than twice the number of IO threads.
If either of these conditions is met, the handleClientsWithPendingWritesUsingThreads function will not use multiple threads to handle clients, but will instead call the handleClientsWithPendingWrites function to directly handle the pending write clients by the main IO thread. The main purpose of doing this is to save CPU overhead when there are not many pending write clients.
The conditional logic for this step is shown below. The stopThreadedIOIfNeeded function is mainly used to determine whether the number of pending write clients is less than twice the number of IO threads.
if (server.io_threads_num == 1 || stopThreadedIOIfNeeded()) {
    return handleClientsWithPendingWrites();
}
In addition, in step 1, the handleClientsWithPendingWritesUsingThreads function also checks whether the IO threads are already active. If not, it calls the startThreadedIO function to set the value of the io_threads_active member variable of the global variable server to 1, indicating that the IO threads are active. This conditional check is as follows:
if (!server.io_threads_active) startThreadedIO();
In summary, Redis assigns pending write clients to IO threads in a round-robin manner and delegates them to handle the writing of data through the handleClientsWithPendingWritesUsingThreads function.
Summary #
In today’s class, I introduced you to the multi-IO thread mechanism newly designed and implemented in Redis 6.0. This mechanism is mainly designed to use multiple IO threads to concurrently handle client data reading, command parsing, and data write-back. By using multiple threads, Redis can fully exploit the server’s multiple cores and improve IO efficiency.
In summary, Redis 6.0 first creates the corresponding number of IO threads during the initialization process, based on the number of IO threads set by the user.
When the Redis server is running normally after initialization, it will decide whether to postpone client read operations by calling the postponeClientRead function in the readQueryFromClient function. At the same time, the Redis server will decide whether to postpone client write operations by calling the prepareClientToWrite function in the addReply function. The clients to be read and written will be added to the clients_pending_read and clients_pending_write lists respectively.
In this way, every time the Redis server enters the event loop process, the handleClientsWithPendingReadsUsingThreads function and the handleClientsWithPendingWritesUsingThreads function will be called in the beforeSleep function to allocate the clients to be read and written to the IO threads in a round-robin manner and add them to the IO threads’ pending client list, io_threads_list.
Once the IO threads are running, they will continuously monitor the clients in the io_threads_list. If there are clients to be read or written, the IO threads will call the readQueryFromClient or writeToClient function to process them.
Finally, I would like to remind you again that the multi-IO threads themselves do not execute commands; they only read data and parse commands, or write data back, in parallel (in the next class, I will discuss this part of the source code in conjunction with the atomicity guarantee of distributed locks). Command execution still happens on the main IO thread. Keeping this in mind is important for understanding the multi-IO thread mechanism, and it prevents the misunderstanding that Redis executes commands in parallel on multiple threads.
In this way, the optimizations we made for Redis’s single main IO thread are still effective, such as avoiding big keys and blocking operations.
Question for each lesson #
The Redis multi-IO thread mechanism uses the startThreadedIO and stopThreadedIO functions to set the activation flag io_threads_active to 1 and 0, respectively. In addition, these two functions unlock and lock the thread mutex array, as shown below. Do you know why they perform these locking and unlocking operations?
void startThreadedIO(void) {
    ...
    for (int j = 1; j < server.io_threads_num; j++)
        pthread_mutex_unlock(&io_threads_mutex[j]); // Unlock the mutex corresponding to each IO thread
    server.io_threads_active = 1;
}

void stopThreadedIO(void) {
    ...
    for (int j = 1; j < server.io_threads_num; j++)
        pthread_mutex_lock(&io_threads_mutex[j]); // Lock the mutex corresponding to each IO thread
    server.io_threads_active = 0;
}
Feel free to share your answers and thought process in the comments. If you find this helpful, please share today’s content with more friends.