20 What's the Difference Between Concurrent Linked Queue and Linked Blocking Queue in the Concurrency Package

Both queues are thread-safe queues in the Java concurrency package, but they differ in several ways.

ConcurrentLinkedQueue:

  • An unbounded, thread-safe queue based on linked nodes, implemented with a non-blocking algorithm.
  • Well suited to highly concurrent scenarios because it delivers high throughput.
  • Does not allow null elements.
  • Has relatively low memory overhead, but does not support blocking operations.

LinkedBlockingQueue:

  • A thread-safe queue based on linked nodes that can be bounded or unbounded.
  • Insertions and removals can block, or wait with a timeout.
  • A capacity can be specified at creation time, or omitted (yielding an effectively unbounded queue).
  • Like ConcurrentLinkedQueue, it does not allow null elements.
  • Because it supports blocking operations, it suits scenarios such as coordinating producer-consumer pipelines.

Choose the queue type that fits your actual needs: use ConcurrentLinkedQueue when you want a high-throughput non-blocking queue, and LinkedBlockingQueue when you need blocking operations or control over capacity.
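
To make the contrast concrete, here is a small sketch (the class name QueueContrast and the message strings are mine, not from the JDK) showing both queues rejecting null, and non-blocking poll versus blocking take:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueContrast {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> clq = new ConcurrentLinkedQueue<>();
        LinkedBlockingQueue<String> lbq = new LinkedBlockingQueue<>(2);

        // Both implementations reject null elements
        try {
            clq.offer(null);
        } catch (NullPointerException e) {
            System.out.println("ConcurrentLinkedQueue rejects null");
        }
        try {
            lbq.put(null);
        } catch (NullPointerException e) {
            System.out.println("LinkedBlockingQueue rejects null");
        }

        // ConcurrentLinkedQueue never blocks: poll() on an empty queue returns null at once
        System.out.println("Empty CLQ poll: " + clq.poll());

        // LinkedBlockingQueue can block: take() waits until an element arrives
        new Thread(() -> {
            try {
                Thread.sleep(100L);
                lbq.put("hello");
            } catch (InterruptedException ignored) { }
        }).start();
        System.out.println("Blocking take got: " + lbq.take());
    }
}
```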


We often loosely refer to all the containers under the concurrent package as concurrent containers, but strictly speaking, only containers like ConcurrentLinkedQueue are concurrent in the true, lock-free sense.

Regarding the differences between them in the given question:

  • Concurrent types are based on lock-free algorithms and generally provide higher throughput in common multi-threaded access scenarios.
  • On the other hand, LinkedBlockingQueue is based on locks and provides blocking methods of the BlockingQueue interface.

I don’t know if you have noticed, but the containers (Queue, List, Set) and Maps provided by the java.util.concurrent package can be roughly divided into three categories based on their names: Concurrent*, CopyOnWrite*, and Blocking*. They are all thread-safe containers, and we can broadly understand that:

  • Concurrent types have lower modification overhead compared to containers like CopyOnWrite.
  • However, everything comes at a cost. Concurrent containers often provide weaker traversal consistency. You can understand the so-called weak consistency as follows: for example, when you traverse with an iterator and the container is modified, the iterator can still continue the traversal.
  • Corresponding to weak consistency is the commonly seen behavior of “fail-fast” in synchronized containers. That is, if the container detects modifications during the traversal, it throws a ConcurrentModificationException and stops the traversal.
  • Another manifestation of weak consistency is that operations like size may not be 100% accurate.
  • At the same time, the performance of reads has a certain level of uncertainty.
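
The weak-consistency versus fail-fast distinction is easy to demonstrate. In this sketch (class name WeakConsistencyDemo is mine), a ConcurrentLinkedQueue iterator tolerates a modification made after it was created, while a plain ArrayList iterator throws:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class WeakConsistencyDemo {
    public static void main(String[] args) {
        // Weakly consistent: the iterator tolerates concurrent modification
        Queue<Integer> queue = new ConcurrentLinkedQueue<>(Arrays.asList(1, 2, 3));
        Iterator<Integer> it = queue.iterator();
        queue.add(4);               // modify while an iterator is live
        while (it.hasNext()) {
            it.next();              // no exception is thrown
        }
        System.out.println("Concurrent iterator finished normally");

        // Fail-fast: a plain ArrayList iterator detects the modification
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
        Iterator<Integer> failFast = list.iterator();
        list.add(4);                // structural modification after iterator creation
        try {
            failFast.next();
        } catch (ConcurrentModificationException e) {
            System.out.println("Fail-fast iterator threw ConcurrentModificationException");
        }
    }
}
```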

Analysis of Test Points #

Today’s question is another lead-in, testing whether you understand the design purposes and implementation differences of different containers within the concurrent package.

The queue is a very important data structure, and in our daily development, many data transmissions between threads rely on it. The various thread pools provided by the Executor framework are also inseparable from queues. Interviewers can assess from different angles, such as:

  • Which queues are bounded and which are unbounded? (Many readers have reported being asked exactly this.)
  • How to choose the appropriate queue implementation for specific scenarios?
  • From the perspective of source code, how are commonly used thread-safe queues implemented, and what improvements have been made to improve performance?

To get the most out of this lecture, you need some basic knowledge of queues as a data structure. If your background in this area is relatively weak, “Data Structures and Algorithm Analysis” is a comprehensive reference book; this column will focus as much as possible on what is specific to Java.

Knowledge Expansion #

Overview of Thread-Safe Queues

In my column lecture 8 I introduced that common collections like LinkedList are Deques, except they are not thread-safe. The following figure shows various thread-safe queue implementations provided by the Java concurrent class library. Note that the figure does not include non-thread-safe queues.

We can classify them from different perspectives. From the perspective of basic data structures, there are two special Deque implementations: ConcurrentLinkedDeque and LinkedBlockingDeque. The focus of a Deque is support for insertion and removal at both ends of the queue, so it provides specific methods such as addFirst/offerFirst, addLast/offerLast, peekFirst/peekLast, and pollFirst/pollLast.
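
A quick sketch of the double-ended behavior, using LinkedBlockingDeque (the class name DequeEndsDemo is mine):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class DequeEndsDemo {
    public static void main(String[] args) {
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>();
        deque.addLast("middle");
        deque.addFirst("head");   // insert at the front
        deque.addLast("tail");    // insert at the back
        System.out.println(deque.pollFirst()); // remove from the front
        System.out.println(deque.pollLast());  // remove from the back
        System.out.println(deque.pollFirst());
    }
}
```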

From these perspectives, we can understand the main functional differences between ConcurrentLinkedQueue and LinkedBlockingQueue, which is sufficient for daily development. However, if we delve deeper, we usually pay more attention to the following aspects.

In terms of behavioral characteristics, most Queues implement the BlockingQueue interface. In addition to regular queue operations, Blocking means they provide specific blocking operations, such as waiting for an element to be enqueued when retrieving (take) or waiting for the queue to have available space when inserting (put).

  /**
   * Retrieves and removes the head of this queue, waiting if necessary until an element becomes available.
   * ...
   */
  E take() throws InterruptedException;
  
  /**
   * Inserts the specified element into this queue, waiting if necessary for space to become available.
   * ...
   */
  void put(E e) throws InterruptedException;

Another point that is often examined in BlockingQueue is whether it is bounded or unbounded. This point often affects our choices in application development. Here is a simple summary.

  • ArrayBlockingQueue is the most typical bounded queue. It internally uses a final array to store data, and the size of the array determines the boundary of the queue. Therefore, when creating an ArrayBlockingQueue, the capacity needs to be specified, such as:

    public ArrayBlockingQueue(int capacity, boolean fair)
    
  • LinkedBlockingQueue is easily mistaken for an unbounded queue, but both its behavior and its internal code are based on bounded logic. If we do not specify a capacity when creating it, the capacity limit is automatically set to Integer.MAX_VALUE, which makes it unbounded for all practical purposes.

  • SynchronousQueue is a very peculiar queue implementation. Every remove operation waits for an insert operation, and vice versa. So what is the capacity of this queue? Is it 1? Actually, it’s not. Its internal capacity is 0.

  • PriorityBlockingQueue is an unbounded priority queue, though strictly speaking, its size is always subject to system resource constraints.

  • DelayQueue and LinkedTransferQueue are also unbounded queues. A natural consequence for unbounded queues is that put operations never run into the blocking situations seen in other BlockingQueues.
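
The bounded/unbounded distinction is directly observable through remainingCapacity(). This sketch (class name CapacityDemo is mine) prints the three characteristic values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class CapacityDemo {
    public static void main(String[] args) {
        // Bounded: capacity fixed at construction time
        System.out.println(new ArrayBlockingQueue<>(10).remainingCapacity());

        // "Unbounded": really a bound of Integer.MAX_VALUE
        System.out.println(new LinkedBlockingQueue<>().remainingCapacity());

        // SynchronousQueue has no internal capacity at all
        System.out.println(new SynchronousQueue<>().remainingCapacity());
    }
}
```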

If we analyze the underlying implementation of different queues, we can see that most BlockingQueues are implemented using locks. Let’s take a look at the typical LinkedBlockingQueue.

/** Lock held by take, poll, etc */
private final ReentrantLock takeLock = new ReentrantLock();

/** Wait queue for waiting takes */
private final Condition notEmpty = takeLock.newCondition();

/** Lock held by put, offer, etc */
private final ReentrantLock putLock = new ReentrantLock();

/** Wait queue for waiting puts */
private final Condition notFull = putLock.newCondition();

When I introduced the usage of condition variables with ReentrantLock, I used ArrayBlockingQueue as the example. You may have noticed that the LinkedBlockingQueue version is different: in ArrayBlockingQueue, notEmpty and notFull are condition variables of the same reentrant lock, whereas LinkedBlockingQueue uses two separate locks for head (take) and tail (put) operations, refining the granularity of locking. As a result, its throughput is relatively better in common scenarios.

The take method in LinkedBlockingQueue also differs from the ArrayBlockingQueue implementation. Because its internal structure is a linked list guarded by two locks, an atomic count of elements has to be maintained explicitly. Please refer to the following code.

public E take() throws InterruptedException {
    final E x;
    final int c;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            notEmpty.await();
        }
        x = dequeue();
        c = count.getAndDecrement();
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}

In contrast, queues such as ConcurrentLinkedQueue are based on CAS (compare-and-swap) lock-free techniques, which avoid taking a lock on every operation and therefore exhibit better scalability.

Though comparatively uncommon in direct use, SynchronousQueue underwent major implementation changes in Java 6: the original lock-based logic was replaced with CAS, resulting in lower synchronization overhead. It is the default work queue of Executors.newCachedThreadPool().

Scenarios and Use Cases for Queues

As I mentioned, in practical development Queues are widely used in producer-consumer scenarios, typically implemented with a BlockingQueue. Because the queue provides the waiting mechanism, we can avoid much of the coordination work ourselves. You can refer to the sample code below:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConsumerProducer {
    public static final String EXIT_MSG = "Good bye!";
    public static void main(String[] args) {
        // Use a smaller queue to better demonstrate its impact in the output
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(3);
        Producer producer = new Producer(queue);
        Consumer consumer = new Consumer(queue);
        new Thread(producer).start();
        new Thread(consumer).start();
    }


    static class Producer implements Runnable {
        private BlockingQueue<String> queue;
        public Producer(BlockingQueue<String> q) {
            this.queue = q;
        }

        @Override
        public void run() {
            for (int i = 0; i < 20; i++) {
                try {
                    Thread.sleep(5L);
                    String msg = "Message" + i;
                    System.out.println("Produced new item: " + msg);
                    queue.put(msg);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            try {
                System.out.println("Time to say good bye!");
                queue.put(EXIT_MSG);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    static class Consumer implements Runnable {
        private BlockingQueue<String> queue;
        public Consumer(BlockingQueue<String> q) {
            this.queue = q;
        }

        @Override
        public void run() {
            try {
                String msg;
                while (!EXIT_MSG.equalsIgnoreCase((msg = queue.take()))) {
                    System.out.println("Consumed item: " + msg);
                    Thread.sleep(10L);
                }
                System.out.println("Got exit message, bye!");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

The above is a typical producer-consumer example. If a non-blocking queue were used instead, we would have to implement our own polling and condition-checking logic (for example, checking whether the return value of poll is null). Unless there are specific requirements, a blocking queue usually leads to simpler, more intuitive code.
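
For contrast, here is a minimal sketch of what a consumer has to do with a non-blocking queue (the class name PollingConsumer, the give-up threshold, and the sleep interval are all arbitrary choices of mine): poll, check for null, and back off by hand instead of simply blocking in take().

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PollingConsumer {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("only message");

        int emptyPolls = 0;
        while (emptyPolls < 3) {          // arbitrary give-up threshold for this sketch
            String msg = queue.poll();    // returns null immediately when empty
            if (msg == null) {
                emptyPolls++;
                Thread.sleep(10L);        // hand-rolled back-off instead of a blocking take()
            } else {
                System.out.println("Consumed: " + msg);
            }
        }
        System.out.println("Gave up after repeated empty polls");
    }
}
```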

Earlier, various queue implementations were introduced. In daily application development, how do we choose?

Taking LinkedBlockingQueue, ArrayBlockingQueue, and SynchronousQueue as examples, let’s analyze them from various aspects based on requirements:

  • Consider the queue boundary requirements in the application scenario. ArrayBlockingQueue has a specific capacity limit, while the capacity of LinkedBlockingQueue depends on whether it is specified during creation. SynchronousQueue, on the other hand, cannot buffer any elements.
  • In terms of space utilization, the array-based structure of ArrayBlockingQueue is more compact because it does not require creating nodes. However, during the initial allocation phase, it requires a continuous memory space, resulting in higher initial memory requirements.
  • In general scenarios, LinkedBlockingQueue has better throughput than ArrayBlockingQueue because it implements finer-grained locking operations.
  • ArrayBlockingQueue has a simple implementation and more predictable performance, making it a solid, stable choice.
  • If we need to implement a handoff scenario between two threads, as demonstrated in the previous column, you might choose CountDownLatch. However, SynchronousQueue is also a perfect fit for this scenario. It unifies thread coordination and data transmission, resulting in more standardized code.
  • Surprisingly, in many cases, the performance of SynchronousQueue often exceeds that of other implementations, especially when the size of the queue elements is small.
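
The handoff scenario mentioned above can be sketched in a few lines (the class name HandoffDemo and the payload string are mine): a put() in one thread rendezvouses with a take() in another, transferring the data and synchronizing the two threads in a single step.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> handoff = new SynchronousQueue<>();

        Thread producer = new Thread(() -> {
            try {
                // put() blocks until another thread is ready to take()
                handoff.put("baton");
            } catch (InterruptedException ignored) { }
        });
        producer.start();

        // take() rendezvouses with the producer's put(): coordination and
        // data transfer happen in one operation
        System.out.println("Received: " + handoff.take());
        producer.join();
    }
}
```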

Today, I analyzed various thread-safe queues in Java, aiming to make the characteristics of each queue clearer from different perspectives, and hopefully reducing your confusion when using them in daily work.

Practice #

Do you have a clear understanding of the topic we discussed today? Today’s content focuses on the perspective of Java itself, and interviewers may also assess from the perspective of algorithms. So, the thinking question I leave you with today is: given a specific data structure, such as a stack, how would you implement a BlockingQueue using it?

Please write your thoughts on this question in the comments. I will select the comments that show careful thinking and send you a learning reward voucher. Feel free to discuss with me.
