14 Why You Shouldn’t Automatically Create Thread Pools #

In this lesson, we look at why we shouldn't automatically create thread pools. "Automatically creating a thread pool" means directly calling one of the factory methods of Executors, such as Executors.newCachedThreadPool(), to obtain one of the common thread pools we studied earlier. This approach carries certain risks, so next we analyze the problems that automatically created thread pools can cause.

FixedThreadPool #

First, let’s take a look at the first type of thread pool, FixedThreadPool. It is a thread pool with a fixed number of threads. As the source code shows, newFixedThreadPool actually calls the ThreadPoolExecutor constructor internally.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

The parameters passed to the constructor create a pool whose core pool size and maximum pool size are both equal to the nThreads we passed in. The key point is that it uses a LinkedBlockingQueue with no capacity limit as the task queue. If tasks are processed relatively slowly, then as requests keep arriving, more and more tasks accumulate in the queue. Eventually, the accumulated tasks consume a large amount of memory and trigger an OOM (OutOfMemoryError), which affects virtually the entire program and has serious consequences.
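To make this failure mode concrete, here is a minimal sketch (my own illustration, not code from this lesson; the class name is made up) that submits tasks far faster than a small FixedThreadPool can drain them. With a small heap, for example -Xmx8m, the unbounded queue eventually exhausts memory and the program dies with an OutOfMemoryError.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedThreadPoolOomDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // The two worker threads process tasks slowly, so every additional
        // submission simply piles up in the unbounded LinkedBlockingQueue.
        while (true) {
            pool.execute(() -> {
                try {
                    Thread.sleep(1000); // simulate slow task processing
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }
}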

SingleThreadExecutor #

The second type of thread pool is SingleThreadExecutor. Let’s analyze the source code for creating it.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

As you can see, the principle of newSingleThreadExecutor is the same as that of newFixedThreadPool. The only difference is that the core pool size and maximum pool size are both fixed at 1, while the task queue is still an unbounded LinkedBlockingQueue. It therefore suffers from the same problem: when tasks accumulate, they may consume a large amount of memory and cause an OOM.

CachedThreadPool #

The third type of thread pool is CachedThreadPool. The source code for creating it is shown below.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

The difference between CachedThreadPool and the previous two thread pools lies in its use of a SynchronousQueue as the task queue. A SynchronousQueue does not store tasks itself but hands each one directly to a worker thread, which is not a problem on its own. The real issue is the second constructor argument, the maximum pool size, which is set to Integer.MAX_VALUE. Because CachedThreadPool effectively places no limit on the number of threads, a flood of tasks can cause it to create a huge number of threads, until the operating system refuses to create any more or memory runs out.
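The following sketch (again my own illustration with a made-up class name) shows the opposite failure: every task blocks long enough that no worker is ever free to be reused, so each new submission forces CachedThreadPool to create another thread until the thread or memory limit is hit.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedThreadPoolDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < 1_000_000; i++) {
            // Each task sleeps, so no idle worker is available for reuse and
            // the SynchronousQueue hands every new task to a brand-new thread.
            pool.execute(() -> {
                try {
                    Thread.sleep(60_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }
}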

ScheduledThreadPool and SingleThreadScheduledExecutor #

The fourth type of thread pool, ScheduledThreadPool, and the fifth type, SingleThreadScheduledExecutor, work in the same way. The source code for creating ScheduledThreadPool is shown below.

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

ScheduledThreadPoolExecutor is a subclass of ThreadPoolExecutor, and its constructor delegates to the superclass as follows.

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
          new DelayedWorkQueue());
}

From the source code, we can see that it uses a DelayedWorkQueue as the task queue, which is a delayed queue and also an unbounded queue. Therefore, like LinkedBlockingQueue, if there are too many tasks in the queue, it may cause an OOM.
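As a rough illustration (a sketch of my own, assuming a single-threaded scheduled pool), scheduling far more delayed tasks than the worker can ever drain grows the unbounded DelayedWorkQueue in exactly the same way.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledThreadPoolDemo {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        // Each scheduled task waits in the unbounded DelayedWorkQueue until its
        // delay expires; submitting without limit eventually exhausts the heap.
        while (true) {
            pool.schedule(() -> System.out.println("task"), 1, TimeUnit.HOURS);
        }
    }
}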

As you can see, these automatically created thread pools all carry risks. By comparison, manually creating a thread pool is the better choice, because it forces us to understand the pool's operating rules: we can choose an appropriate number of threads, and we can reject new task submissions when necessary to avoid the risk of resource exhaustion.
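As a closing sketch, here is one possible way to create a thread pool manually with the ThreadPoolExecutor constructor. The pool sizes, queue capacity, and rejection policy below are illustrative values chosen for this example, not recommendations from the lesson.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualThreadPoolDemo {
    public static void main(String[] args) {
        // Bounded queue plus an explicit rejection policy: once the queue is
        // full and all 20 threads are busy, new submissions run on the
        // caller's thread instead of piling up in memory.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                                  // core pool size
                20,                                  // maximum pool size
                60L, TimeUnit.SECONDS,               // idle timeout for non-core threads
                new ArrayBlockingQueue<>(1000),      // bounded task queue
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.execute(() ->
                System.out.println("task runs on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}

With a bounded queue, the pool's behavior under overload is explicit: it either grows up to the maximum pool size or falls back to the rejection handler, rather than silently accumulating tasks or threads.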