26 Should Read Locks Be Allowed to Jump the Queue? What Are Upgrading and Downgrading of Read-Write Locks? #

In this lesson, we will mainly discuss whether read locks should be allowed to jump the queue, and what upgrading and downgrading of read-write locks mean.

Read lock queue jumping strategy #

First, let’s take a look at the queue jumping strategy for read locks. Let’s quickly review the concept of fairness and unfairness in locks from Lesson 24. In ReentrantLock, if the lock is set to be unfair, a thread trying to acquire it may jump the queue at the moment the lock is released, instead of waiting in line. Does the read-write lock follow the same strategy?

First, we see that ReentrantReadWriteLock can be set to be fair or unfair, as shown below:

Fair lock:

ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(true);

Unfair lock:

ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);

If we want a fair lock, we pass true to the constructor; if we want an unfair lock, we pass false. By default, the lock is unfair. Before acquiring a read lock, a thread checks the readerShouldBlock() method, and before acquiring a write lock, it checks the writerShouldBlock() method, to decide whether queue jumping is allowed.
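As a quick sanity check, ReentrantReadWriteLock also exposes an isFair() method, so we can confirm which mode a lock was built with. Here is a minimal sketch (the class name FairnessCheck is made up for illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessCheck {
    public static void main(String[] args) {
        ReentrantReadWriteLock fairLock = new ReentrantReadWriteLock(true);
        ReentrantReadWriteLock defaultLock = new ReentrantReadWriteLock();

        System.out.println(fairLock.isFair());    // true
        System.out.println(defaultLock.isFair()); // false: the default is unfair
    }
}
```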

First, let’s take a look at the implementation of these two methods in fair locks:

final boolean writerShouldBlock() {
    return hasQueuedPredecessors();
}

final boolean readerShouldBlock() {
    return hasQueuedPredecessors();
}

It is obvious that in the case of fair locks, regardless of whether the lock is for reading or writing, if there are threads waiting in the queue, i.e., when hasQueuedPredecessors() returns true, both writer and reader will block. In other words, queue jumping is not allowed, which is in line with the fairness principle of fair locks.

Now let’s take a look at the implementation of unfair locks:

final boolean writerShouldBlock() {
    return false; // writers can always barge
}

final boolean readerShouldBlock() {
    return apparentlyFirstQueuedIsExclusive();
}

The writerShouldBlock() method always returns false, which means a thread trying to acquire the write lock can attempt to jump the queue at any time; this is consistent with the design of the unfair ReentrantLock. Read locks, however, are handled differently, and the strategy here is interesting. Consider the following scenario:

Suppose Thread 2 and Thread 4 are currently reading, and Thread 3 wants to write. However, since Thread 2 and Thread 4 already hold read locks, Thread 3 enters the waiting queue. At this time, Thread 5 suddenly comes and wants to jump the queue to acquire a read lock:

In this scenario, there are two possible strategies to handle it:

Strategy 1: Allow queue jumping #

Since Thread 2 and Thread 4 are already reading, and read locks can be shared, letting Thread 5 read as well adds little extra burden. The first strategy is therefore to let Thread 5 join Thread 2 and Thread 4 and read together.

This strategy seems to improve efficiency, but it has a serious problem. If threads that want to read keep arriving, such as Thread 6, they can also jump the queue. The read lock may then never be fully released, so Thread 3 may never acquire the write lock: the thread that needs the write lock falls into a “starvation” state and cannot execute for a long time.


Strategy 2: Do not allow queue jumping #

This strategy takes into account that Thread 3 has already been waiting. Even though letting Thread 5 jump the queue directly would be more efficient, we still make Thread 5 line up and wait:

According to this strategy, Thread 5 will be put into the waiting queue and placed behind Thread 3, allowing Thread 3 to execute first. This can avoid the “starvation” state, which is beneficial for the robustness of the program. Thread 5 will only have a chance to run after Thread 3 finishes running. This way, no one will wait for too long.


Therefore, even if it is an unfair lock, if the head node of the waiting queue is a thread that is trying to acquire the write lock, the read lock is still not allowed to jump the queue in order to avoid “starvation”.

Demonstration of Strategy Selection #

The choice of strategy depends on the specific lock implementation. The implementation of ReentrantReadWriteLock has wisely chosen Strategy 2.

Now let’s demonstrate the above scenario with actual code.

The code for demonstrating the strategy is as follows:

import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Description: Demonstrate that read locks do not jump the queue
 */
public class ReadLockJumpQueue {

    private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock();

    private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();

    private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

    private static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " acquired read lock and is reading");
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released read lock");
            readLock.unlock();
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " acquired write lock and is writing");
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released write lock");
            writeLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read(),"Thread-2").start();
        new Thread(() -> read(),"Thread-4").start();
        new Thread(() -> write(),"Thread-3").start();
        new Thread(() -> read(),"Thread-5").start();
    }
}

The result of the above code is:

    Thread-2 acquired read lock and is reading
    Thread-4 acquired read lock and is reading
    Thread-2 released read lock
    Thread-4 released read lock
    Thread-3 acquired write lock and is writing
    Thread-3 released write lock
    Thread-5 acquired read lock and is reading
    Thread-5 released read lock

From this result, we can see that ReentrantReadWriteLock follows the “do not allow queue jumping” strategy (Strategy 2), which greatly reduces the probability of “starvation”. (If your result differs from the one shown here, you can add a short pause, for example 100 ms, after starting each thread to make the start order deterministic, as shown below.)
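For reference, here is a sketch of the adjusted main method with those pauses (the 100 ms value is just a suggestion):

```java
public static void main(String[] args) throws InterruptedException {
    new Thread(() -> read(), "Thread-2").start();
    Thread.sleep(100); // short pauses make the start order deterministic
    new Thread(() -> read(), "Thread-4").start();
    Thread.sleep(100);
    new Thread(() -> write(), "Thread-3").start();
    Thread.sleep(100);
    new Thread(() -> read(), "Thread-5").start();
}
```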

Lock Upgrading and Downgrading #

Code Demonstration of Lock Downgrading with a Read-Write Lock #

Now let’s take a look at lock upgrading and downgrading. First, let’s look at the following code, which demonstrates how to use lock downgrading when updating a cache.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {

    Object data;
    volatile boolean cacheValid;
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Before acquiring the write lock, we must release the read lock.
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Check the data again: another thread might have modified it
                // between our releasing the read lock and acquiring the write lock.
                if (!cacheValid) {
                    data = new Object();
                    cacheValid = true;
                }
                // Acquire the read lock without releasing the write lock.
                // This is lock downgrading.
                rwl.readLock().lock();
            } finally {
                // The write lock is released, but the read lock is still held.
                rwl.writeLock().unlock();
            }
        }
        try {
            System.out.println(data);
        } finally {
            // Release the read lock.
            rwl.readLock().unlock();
        }
    }
}
```

In this code, we have a read-write lock, and the most important part is the processCachedData() method. The method first acquires the read lock with rwl.readLock().lock() and checks whether the cache is valid. If it is valid, the whole if block is skipped. If the cache is invalid, it needs to be updated, and the read lock we already hold is not enough for that, so we need to acquire the write lock.

Before acquiring the write lock, we first release the read lock and then obtain the write lock with rwl.writeLock().lock(). Then comes the classic try-finally block. In the try block, we check the cache validity a second time, because between releasing the read lock and acquiring the write lock another thread may have modified the data first.

If the cache is still invalid, we use new Object() to stand for newly fetched data and set the cache flag to true so that the cache becomes valid. Since we still want to print the value of data afterwards, we cannot release every lock at this point. Instead, we acquire the read lock without releasing the write lock, which is what the line rwl.readLock().lock() does. Then, while holding the read lock, we release the write lock, and finally, in the last try block, we print the value of data.

This is a very typical example of lock downgrading.

You might wonder, why go through all this trouble of downgrading the lock? Why not just hold the highest level write lock all the time? This way, no one can interfere with my work, and it will always be thread-safe.

Why do we need lock downgrading? #

If we instead held the write lock for the whole method and released it only at the end, the code would indeed be thread-safe, but it would be unnecessary, because only one statement actually modifies the data:

data = new Object();

After that, we only read data. If we kept holding the write lock, multiple threads would not be able to read at the same time; holding the write lock longer than necessary wastes resources and reduces overall efficiency. Lock downgrading is therefore a good way to improve overall performance, as the sketch below illustrates.
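To make the cost concrete, here is a minimal sketch, with a made-up class name DowngradeBenefit and one-second sleeps standing in for the read work: three read-only tasks finish in roughly one second when they share the read lock, but take roughly three seconds when forced through the exclusive write lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeBenefit {

    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    // Runs the given task on three threads and returns the total elapsed time.
    private static long timed(Runnable task) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread[] threads = new Thread[3];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(task);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        // "Reading" under the shared read lock: the three threads overlap.
        long readMillis = timed(() -> {
            rwl.readLock().lock();
            try {
                Thread.sleep(1000); // pretend to read for one second
            } catch (InterruptedException ignored) {
            } finally {
                rwl.readLock().unlock();
            }
        });

        // The same "reading" forced through the exclusive write lock: the threads serialize.
        long writeMillis = timed(() -> {
            rwl.writeLock().lock();
            try {
                Thread.sleep(1000); // pretend to read for one second
            } catch (InterruptedException ignored) {
            } finally {
                rwl.writeLock().unlock();
            }
        });

        System.out.println("Shared read lock:     ~" + readMillis + " ms");  // roughly 1000 ms
        System.out.println("Exclusive write lock: ~" + writeMillis + " ms"); // roughly 3000 ms
    }
}
```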

Supports lock downgrading, does not support upgrading #

If we run the following code, which tries to acquire the write lock without releasing the read lock, i.e., a lock upgrade, the thread will block forever and the program will never finish:

final static ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

public static void main(String[] args) {
    upgrade();
}

public static void upgrade() {
    rwl.readLock().lock();
    System.out.println("Acquired the read lock");
    // Blocks forever: the write lock cannot be acquired while any read lock,
    // including this thread's own, is still held.
    rwl.writeLock().lock();
    System.out.println("Upgrade successful");
}

This code will print “Acquired the read lock”, but it will not print “Upgrade successful” because ReentrantReadWriteLock does not support upgrading from a read lock to a write lock.
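If you would rather detect the failed upgrade than block forever, you can probe with tryLock(), which returns immediately instead of waiting. A minimal sketch (the class name TryUpgrade is made up for illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TryUpgrade {

    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        rwl.readLock().lock();
        try {
            // Returns immediately instead of blocking: the attempt fails
            // because a read lock (held by this very thread) is still active.
            boolean upgraded = rwl.writeLock().tryLock();
            System.out.println("Upgrade succeeded? " + upgraded); // prints false
        } finally {
            rwl.readLock().unlock();
        }
    }
}
```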

Why doesn’t it support lock upgrading? #

We know the defining characteristics of a read-write lock: multiple threads can hold the read lock at the same time, only one thread can hold the write lock, and the write lock cannot be granted while any other thread still holds a read lock.

Because the write lock cannot coexist with other threads’ read locks, a thread that wants to upgrade from a read lock to a write lock must wait until all read locks have been released before the upgrade can take place.

Let’s assume there are three threads, A, B, and C, and they all hold read locks. If thread A tries to upgrade from a read lock to a write lock, it must wait for threads B and C to release their read locks. If over time, B and C gradually release their read locks, then thread A can successfully upgrade and acquire the write lock.

However, let’s consider a special case. Suppose both thread A and thread B want to upgrade to write locks. In this case, thread A needs to wait for all threads, including thread B, to release their read locks. Similarly, thread B also needs to wait for all threads, including thread A, to release their read locks. This is a typical deadlock situation. Neither thread is willing to be the first to release its own lock.

Lock upgrading is not impossible in general, and there are feasible designs: if the lock guarantees that only one thread at a time is allowed to upgrade, upgrading can be done safely. However, the most common implementation, ReentrantReadWriteLock, does not support it.

Summary #

For ReentrantReadWriteLock:

  • Queuing strategy:
    • Under the fair strategy, if there are already threads in the queue, no thread is allowed to jump the queue.
    • Under the non-fair strategy:
      • If read locks were allowed to jump the queue, then, because multiple threads can hold read locks at the same time, later readers could keep jumping in successfully; the read lock might never be fully released, and a thread waiting for the write lock could wait indefinitely. To prevent this “starvation”, read-lock queue jumping is not allowed when the head of the waiting queue is a thread trying to acquire the write lock.
      • The write lock is allowed to jump the queue at any time, because barging with a write lock rarely succeeds anyway: it only succeeds when no other thread holds a read lock or the write lock. This does not cause “starvation”, and allowing it improves efficiency.
  • Upgrade/downgrade strategy: Only supports downgrading from a write lock to a read lock, not upgrading from a read lock to a write lock.