
04 Connection Pool: Don’t Let the Connection Pool Backfire #

Today, let’s talk about the things to pay attention to when using a connection pool.

In the previous lesson, we looked at what to consider when using a thread pool. Today, I will talk about another important pooling technique: the connection pool.

First, let me tell you about the structure of a connection pool. The connection pool generally provides interfaces for acquiring and returning connections to the client, and exposes configurable parameters such as the minimum number of idle connections and the maximum number of connections. Internally, it implements functions such as establishing connections, maintaining connection heartbeats, managing connections, reclaiming idle connections, and checking connection availability. The structure of a connection pool is illustrated below:

[Figure: structure of a connection pool]
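
To make this structure concrete, below is a minimal sketch built on Apache Commons Pool 2 (the same library that, as we will see later, JedisPool uses internally). The Conn class is a hypothetical stand-in for a real connection, and the parameter values are illustrative only:

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

static class Conn { } // hypothetical stand-in for a real connection

static class ConnFactory extends BasePooledObjectFactory<Conn> {
    @Override
    public Conn create() { return new Conn(); } // "establish a connection"

    @Override
    public PooledObject<Conn> wrap(Conn conn) { return new DefaultPooledObject<>(conn); }

    @Override
    public boolean validateObject(PooledObject<Conn> p) { return true; } // availability check
}

GenericObjectPoolConfig<Conn> config = new GenericObjectPoolConfig<>();
config.setMaxTotal(10); // maximum number of connections
config.setMinIdle(2);   // minimum number of idle connections
config.setTimeBetweenEvictionRunsMillis(30_000); // how often idle connections are checked and reclaimed

GenericObjectPool<Conn> pool = new GenericObjectPool<>(new ConnFactory(), config);
Conn conn = pool.borrowObject(); // acquire a connection
try {
    // ... use the connection ...
} finally {
    pool.returnObject(conn); // return it to the pool
}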

The connection pools that are commonly used in business projects mainly include database connection pools, Redis connection pools, and HTTP connection pools. So today, I will use these three types of connection pools as examples to talk to you about the common mistakes in using and configuring connection pools.

Pay attention to whether the client SDK is based on a connection pool #

When using a third-party client for network communication, we first need to determine whether the client SDK is implemented based on connection pooling technology. We know that TCP is a connection-oriented byte-stream protocol:

Being connection-oriented means that a connection needs to be created before it can be used, and the three-way handshake for creating a connection incurs certain overhead.

Being based on a byte stream means that bytes are the smallest unit for sending data. The TCP protocol itself cannot distinguish which bytes constitute a complete message body and cannot perceive whether multiple clients are using the same TCP connection. TCP is only a pipeline for reading and writing data.

If the client SDK does not use connection pooling and directly uses TCP connections, then we need to consider the overhead of establishing a TCP connection each time. Furthermore, because TCP is based on a byte stream, reusing the same connection with multiple threads may lead to thread safety issues.
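
As a rough illustration of that handshake cost, here is a minimal sketch that times bare TCP connection setup; the address 127.0.0.1:6379 is just a placeholder for any reachable server:

import java.net.InetSocketAddress;
import java.net.Socket;

// Each iteration pays the full three-way handshake before any data can move.
for (int i = 0; i < 5; i++) {
    long start = System.nanoTime();
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress("127.0.0.1", 6379), 1000); // 1-second connect timeout
    }
    System.out.printf("connect #%d took %.2f ms%n", i, (System.nanoTime() - start) / 1e6);
}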

Let’s first look at the three types of APIs that involve TCP connections provided by client SDKs. When dealing with various third-party clients, only by identifying which type they belong to can we understand how to use them.

APIs with connection pooling and connection separation: This type has an XXXPool class responsible for implementing the connection pooling. It first obtains an XXXConnection from the pool and then uses the obtained connection to make requests to the server. After finishing, the caller needs to return the connection. Generally, the XXXPool is thread-safe and multiple connections can be obtained and returned concurrently, while the XXXConnection itself is not thread-safe. In the structural diagram of the connection pool, XXXPool is the box on the right side, and the left side represents our own code.

APIs with an internal connection pool: This type provides an XXXClient class externally, and requests to the server can be directly made through this class. The class itself maintains a connection pool, and the user of the SDK does not need to worry about acquiring and returning connections. Generally, XXXClient is thread-safe. In the structural diagram of the connection pool, the entire API is represented by the part wrapped in the blue box.

APIs without connection pooling: These are typically named XXXConnection, to distinguish single-connection APIs from pooled ones (naming them XXXClient or simply XXX is best avoided, since it invites confusion). A direct-connection API wraps a single connection: each use requires establishing and then closing a connection, so performance is mediocre, and it is usually not thread-safe. In the structural diagram of the connection pool, this form is equivalent to removing the connection-pool box on the right: the client connects directly to the server.

Although these naming conventions are common, some SDKs inevitably deviate from them. Therefore, when using a third-party SDK, always check the official documentation first to understand its best practices, or search for terms like “XXX threadsafe/singleton” on sites like Stack Overflow to see how others answer. You can also drill down through the source code, layer by layer, until you can pin down the relationship between the Socket and the client API.

Once the implementation method of the SDK connection pool is clear, we can roughly understand the best practices for using the SDK:

If it is the separation method, the connection pool itself is generally thread-safe and can be reused. Each use needs to acquire a connection from the connection pool and return it after use, which is the responsibility of the caller.

If it has a built-in connection pool, the SDK handles acquiring and returning connections itself, and we simply reuse the client object directly.

If the SDK does not implement a connection pool (though most middleware and database client SDKs do), it is usually not thread-safe, and using it in a short-connection fashion performs poorly. In this case, consider whether to wrap it in a connection pool yourself.

Next, I will take Jedis, the most common library used in Java for operating Redis, as an example to analyze from the perspective of source code. I will explain what type of API the Jedis class belongs to, what problems will occur if a connection is directly reused in a multi-threaded environment, and how to fix this problem using best practices.

First, we initialize two key-value pairs in Redis, Key a = Value 1 and Key b = Value 2:

@PostConstruct
public void init() {

    try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
        Assert.isTrue("OK".equals(jedis.set("a", "1")), "set a = 1 return OK");
        Assert.isTrue("OK".equals(jedis.set("b", "2")), "set b = 2 return OK");
    }
}

Then, we start two threads that share the same Jedis instance. Each thread loops 1000 times and reads the values of Key “a” and “b” separately, checking if they are 1 and 2 respectively:

Jedis jedis = new Jedis("127.0.0.1", 6379);

new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        String result = jedis.get("a");
        if (!result.equals("1")) {
            log.warn("Expect a to be 1 but found {}", result);
            return;
        }
    }
}).start();

new Thread(() -> {
    for (int i = 0; i < 1000; i++) {
        String result = jedis.get("b");
        if (!result.equals("2")) {
            log.warn("Expect b to be 2 but found {}", result);
            return;
        }
    }
}).start();

TimeUnit.SECONDS.sleep(5);

After running the program several times, we see assorted strange errors in the logs, including the value of Key b read back as 1, an unexpected end of stream, and socket-closed exceptions:

// Error 1
[14:56:19.069] [Thread-28] [WARN ] [.t.c.c.redis.JedisMisreuseController:45 ] - Expect b to be 2 but found 1

// Error 2
redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.
    at redis.clients.jedis.util.RedisInputStream.ensureFill(RedisInputStream.java:202)
    at redis.clients.jedis.util.RedisInputStream.readLine(RedisInputStream.java:50)
    at redis.clients.jedis.Protocol.processError(Protocol.java:114)
    at redis.clients.jedis.Protocol.process(Protocol.java:166)
    at redis.clients.jedis.Protocol.read(Protocol.java:220)
    at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:318)
    at redis.clients.jedis.Connection.getBinaryBulkReply(Connection.java:255)
    at redis.clients.jedis.Connection.getBulkReply(Connection.java:245)
    at redis.clients.jedis.Jedis.get(Jedis.java:181)
    at org.geekbang.time.commonmistakes.connectionpool.redis.JedisMisreuseController.lambda$wrong$1(JedisMisreuseController.java:43)
    at java.lang.Thread.run(Thread.java:748)

// Error 3
java.io.IOException: Socket Closed
    at java.net.AbstractPlainSocketImpl.getOutputStream(AbstractPlainSocketImpl.java:440)
    at java.net.Socket$3.run(Socket.java:954)
    at java.net.Socket$3.run(Socket.java:952)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.Socket.getOutputStream(Socket.java:951)
    at redis.clients.jedis.Connection.connect(Connection.java:200)
    ... 7 more

Let’s analyze the source code of the Jedis class and figure out the reasons.

public class Jedis extends BinaryJedis implements JedisCommands, MultiKeyCommands,
    AdvancedJedisCommands, ScriptingCommands, BasicCommands, ClusterCommands, SentinelCommands, ModuleCommands {

}

public class BinaryJedis implements BasicCommands, BinaryJedisCommands, MultiKeyBinaryCommands,
    AdvancedBinaryJedisCommands, BinaryScriptingCommands, Closeable {

    protected Client client = null;

    ...
}

public class Client extends BinaryClient implements Commands {

}

public class BinaryClient extends Connection {

}

public class Connection implements Closeable {

    private Socket socket;

    private RedisOutputStream outputStream;

    private RedisInputStream inputStream;

}

You can see that Jedis extends BinaryJedis, and BinaryJedis holds a single Client instance; Client extends BinaryClient, which in turn extends Connection; and Connection holds a single Socket together with the read and write streams for that socket. Therefore, one Jedis corresponds to one TCP connection. The class diagram is as follows:

[Figure: Jedis class diagram]

BinaryClient encapsulates various Redis commands and, ultimately, calls the methods of the base class Connection to send the commands using the Protocol class. By looking at the source code of the sendCommand method of the Protocol class, we can see that it directly operates on RedisOutputStream to write bytes when sending commands.

When we reuse a Jedis object across threads, we are in fact sharing its RedisOutputStream. If multiple threads issue commands concurrently, nothing guarantees that a whole command is written to the socket as one atomic operation, nor that the reply a thread reads back belongs to the command it sent:

private static void sendCommand(final RedisOutputStream os, final byte[] command,
    final byte[]... args) {

  try {
    ...
    os.write(ASTERISK_BYTE);
    os.writeIntCrLf(args.length + 1);
    os.write(DOLLAR_BYTE);
    os.writeIntCrLf(command.length);
    os.write(command);
    os.writeCrLf();

    for (final byte[] arg : args) {
      os.write(DOLLAR_BYTE);
      os.writeIntCrLf(arg.length);
      os.write(arg);
      os.writeCrLf();
    }
  } catch (IOException e) {
    throw new JedisConnectionException(e);
  }
}

By looking at this code, we can understand why strange problems occur when using the Jedis object to operate Redis in a multi-threaded scenario.

For example, if writes from the two threads interleave, the interspersed bytes no longer form valid Redis commands, so Redis closes the client connection. Or suppose thread 1 sends get a and thread 2 sends get b, and Redis duly returns 1 and then 2; if thread 2 happens to read the first reply, it receives the value 1 that belongs to thread 1’s request, and the data is crossed.
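
Concretely, tracing the sendCommand code above for jedis.get("a"): the command is GET with one argument, so args.length + 1 is 2, and the bytes below must reach Redis as one uninterrupted sequence. If another thread’s write lands anywhere in between, the server no longer sees a valid command:

*2\r\n     (number of parts: the command plus one argument)
$3\r\n     (length of "GET")
GET\r\n
$1\r\n     (length of "a")
a\r\n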

The fix is to obtain Jedis instances from JedisPool, the thread-safe connection pool class that Jedis provides. JedisPool can be declared static and shared among threads; each use borrows a Jedis from the pool and returns it, via the try-with-resources pattern, when done.

private static JedisPool jedisPool = new JedisPool("127.0.0.1", 6379);

new Thread(() -> {
    try (Jedis jedis = jedisPool.getResource()) {
        for (int i = 0; i < 1000; i++) {
            String result = jedis.get("a");
            if (!result.equals("1")) {
                log.warn("Expect a to be 1 but found {}", result);
                return;
            }
        }
    }
}).start();

new Thread(() -> {
    try (Jedis jedis = jedisPool.getResource()) {
        for (int i = 0; i < 1000; i++) {
            String result = jedis.get("b");
            if (!result.equals("2")) {
                log.warn("Expect b to be 2 but found {}", result);
                return;
            }
        }
    }
}).start();

After this fix, the code no longer has thread safety issues. In addition, it is best to use a shutdown hook to close the JedisPool before the program exits:

@PostConstruct
public void init() {

    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        jedisPool.close();
    }));

}

Looking at the implementation of the close method in the Jedis class, we can see that if the Jedis instance was obtained from a connection pool, close returns it to the pool: via returnBrokenResource if the underlying client is broken, or returnResource otherwise:

public class Jedis extends BinaryJedis implements JedisCommands, MultiKeyCommands,
    AdvancedJedisCommands, ScriptingCommands, BasicCommands, ClusterCommands, SentinelCommands, ModuleCommands {

  protected JedisPoolAbstract dataSource = null;

  @Override
  public void close() {

    if (dataSource != null) {
      JedisPoolAbstract pool = this.dataSource;
      this.dataSource = null;
      if (client.isBroken()) {
        pool.returnBrokenResource(this);
      } else {
        pool.returnResource(this);
      }
    } else {
      super.close();
    }
  }
}

If it is not obtained from a connection pool, the connection will be closed directly, ultimately calling the disconnect method of the Connection class to close the TCP connection:

public void disconnect() {

  if (isConnected()) {
    try {
      outputStream.flush();
      socket.close();
    } catch (IOException ex) {
      broken = true;
      throw new JedisConnectionException(ex);
    } finally {
      IOUtils.closeQuietly(socket);
    }
  }
}

We can see that Jedis can be used independently or in conjunction with a connection pool, which is JedisPool. Let’s take a look at the implementation of JedisPool.

public class JedisPool extends JedisPoolAbstract {

  @Override
  public Jedis getResource() {

    Jedis jedis = super.getResource();
    jedis.setDataSource(this);
    return jedis;

  }

  @Override
  protected void returnResource(final Jedis resource) {

    if (resource != null) {
      try {
        resource.resetState();
        returnResourceObject(resource);
      } catch (Exception e) {
        returnBrokenResource(resource);
        throw new JedisException("Resource is returned to the pool as broken", e);
      }
    }
  }
}

public class JedisPoolAbstract extends Pool<Jedis> {

}

public abstract class Pool<T> implements Closeable {
  protected GenericObjectPool<T> internalPool;

}

The getResource method of JedisPool marks the pool itself as the data source of the Jedis object it hands out. JedisPool extends JedisPoolAbstract, which in turn extends the abstract class Pool<T>, and Pool internally maintains an Apache Commons Pool GenericObjectPool. In other words, JedisPool’s connection pooling is built on GenericObjectPool.
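
Because the pooling is delegated to GenericObjectPool, tuning JedisPool goes through JedisPoolConfig, which extends GenericObjectPoolConfig. Below is a minimal sketch; the parameter values are illustrative, not recommendations:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

JedisPoolConfig config = new JedisPoolConfig();
config.setMaxTotal(10);        // maximum connections in the pool
config.setMinIdle(2);          // minimum idle connections to keep warm
config.setMaxWaitMillis(1000); // maximum wait when the pool is exhausted
config.setTestOnBorrow(true);  // validate a connection as it is borrowed

JedisPool pool = new JedisPool(config, "127.0.0.1", 6379);
try (Jedis jedis = pool.getResource()) { // borrowed, and auto-returned on close()
    jedis.get("a");
}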

Now that we understand the principle, we can use Jedis with confidence.

Always ensure reuse when using connection pools #

When introducing thread pools, we have emphasized that pools are meant for reuse. Otherwise, the cost of using them would be greater than creating a single object each time. This is particularly true for connection pools, for the following reasons:

When a connection pool is created, it is likely to establish several connections at once. For performance, most pools eagerly create the configured minimum number of idle connections during initialization, on the assumption that initialization happens only once and the connections can then be used directly. If you create a new pool every time you need one, you may establish N connections only to use a single one.

Connection pools generally contain management modules, the green part in the structure diagram. For example, most pools implement an idle timeout: the pool periodically checks how long each connection has been idle and reclaims idle connections down to the configured minimum, reducing pressure on the server. This idle-connection management usually runs on a dedicated thread, and some pools use additional threads for keep-alive and other housekeeping. So creating a connection pool also means starting one or more background threads, which is another cost you pay each time.

In addition to the cost of usage, not releasing the connection pool can also cause thread leaks. Next, I will use Apache HttpClient as an example to talk about the problems with not reusing connection pools.

First, create a CloseableHttpClient, set it to use a PoolingHttpClientConnectionManager connection pool, and enable idle-connection eviction with a maximum idle time of 60 seconds. Then, use this client to request a server endpoint that returns the string “OK”:

@GetMapping("wrong1")
public String wrong1() {

    CloseableHttpClient client = HttpClients.custom()
            .setConnectionManager(new PoolingHttpClientConnectionManager())
            .evictIdleConnections(60, TimeUnit.SECONDS).build();

    try (CloseableHttpResponse response = client.execute(new HttpGet("http://127.0.0.1:45678/httpclientnotreuse/test"))) {
        return EntityUtils.toString(response.getEntity());
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}

After accessing this interface several times, check the application thread status. You can see that there are many threads called “Connection evictor” that are not destroyed:

[Screenshot: many “Connection evictor” threads still alive]

Perform a few seconds of load testing on this interface (using wrk, with 1 concurrency and 1 connection). You can see that more than three thousand TCP connections have been established to port 45678 (where one connection is from the load testing client to Tomcat, and most of them are from HttpClient to Tomcat):

[Screenshot: thousands of TCP connections established to port 45678]

Fortunately, with the idle connection recovery policy, the connections will be in the CLOSE_WAIT state after 60 seconds and will eventually be closed completely.

[Screenshot: connections in the CLOSE_WAIT state]

These two points prove that CloseableHttpClient belongs to the second type, which is an API with an internal connection pool. The best practice is to reuse it.

The way to reuse it is simple. You can declare CloseableHttpClient as a static variable, create it only once, and close the connection pool through the addShutdownHook hook before the JVM shuts down. When using it, you can directly use CloseableHttpClient without creating it every time.

First, define a “right” interface that performs the server call the proper way:

private static CloseableHttpClient httpClient = null;

static {

    // Alternatively, you can define CloseableHttpClient as a Bean and close this HttpClient in a method annotated with @PreDestroy
    httpClient = HttpClients.custom().setMaxConnPerRoute(1).setMaxConnTotal(1).evictIdleConnections(60, TimeUnit.SECONDS).build();

    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        try {
            httpClient.close();
        } catch (IOException ignored) {
        }
    }));
}

@GetMapping("right")
public String right() {

    try (CloseableHttpResponse response = httpClient.execute(new HttpGet("http://127.0.0.1:45678/httpclientnotreuse/test"))) {
        return EntityUtils.toString(response.getEntity());
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}

Then, define a “wrong2” interface that patches the earlier code: it still creates a CloseableHttpClient per request, but at least ensures the connection pool is closed after each use:

@GetMapping("wrong2")
public String wrong2() {

    try (CloseableHttpClient client = HttpClients.custom()
            .setConnectionManager(new PoolingHttpClientConnectionManager())
            .evictIdleConnections(60, TimeUnit.SECONDS).build();

         CloseableHttpResponse response = client.execute(new HttpGet("http://127.0.0.1:45678/httpclientnotreuse/test"))) {
            return EntityUtils.toString(response.getEntity());
        } catch (Exception ex) {
        ex.printStackTrace();
    }
    return null;
}

Use wrk to perform 60 seconds of load testing on the “wrong2” and “right” interfaces respectively. You can see the performance difference between the two usage methods: the QPS for creating a connection pool each time is 337, while the QPS for reusing the connection pool is 2022:

[Screenshot: wrk results, 337 QPS when creating a pool per request vs 2022 QPS when reusing]

Such a significant difference is obviously down to TCP connection reuse. You may have noticed that when defining the connection pool, I set the maximum number of connections to 1, so the reuse approach should always go through the same TCP connection, while the create-a-pool-each-time approach should open a new TCP connection for every request.

Next, we will use the network capture tool Wireshark to verify this.

If you call the “wrong2” interface and create a new connection pool each time to send an HTTP request, you can see from Wireshark that the client port for accessing the server on port 45678 is always new for each request. Here, I made three requests, and the client ports for the HttpClient accessing the server on port 45678 are 51677, 51679, and 51681 respectively:

[Screenshot: Wireshark, a new client port (51677, 51679, 51681) for each request]

In other words, each time it is a new TCP connection. You can also see the complete TCP handshake and termination process by removing the HTTP filter condition:

[Screenshot: Wireshark, a full TCP handshake and teardown for every request]

The behavior of the “right” interface, which reuses the pool, is completely different. You can see that the second HTTP request, frame #41, uses the same client port 61468 as the first connection, frame #23; Wireshark also shows that within this TCP session, request #41 is the second request, the previous one being #23 and the next #75:

[Screenshot: Wireshark, client port 61468 reused across requests #23, #41, and #75]

Only when the TCP connection is idle for more than 60 seconds will it be disconnected and the connection pool will create a new connection. You can try observing this process with Wireshark.

Next, let’s continue discussing connection pool configuration issues.

Connection Pool Configuration is Not Set in Stone #

To make it easy to configure the pool in line with capacity planning, connection pools expose many parameters, including the minimum number of (idle) connections, the maximum number of connections, the idle-connection survival time, and the connection survival time. The most important of these is the maximum number of connections, which caps how many connections the pool can hand out; once the limit is reached, new requests must wait for other requests to release a connection.
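
Taking the Hikari pool used later in this lesson as an example, these parameters map to Spring Boot properties as shown below. The values are Hikari’s defaults, listed only to make the mapping concrete:

# minimum number of idle connections
spring.datasource.hikari.minimum-idle=10
# maximum number of connections
spring.datasource.hikari.maximum-pool-size=10
# idle-connection survival time, in milliseconds
spring.datasource.hikari.idle-timeout=600000
# maximum lifetime of any connection, in milliseconds
spring.datasource.hikari.max-lifetime=1800000
# maximum time to wait for a connection from the pool, in milliseconds
spring.datasource.hikari.connection-timeout=30000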

However, a bigger maximum is not automatically better. If it is set too large, the client wastes resources maintaining the connections, and, more importantly, the server, which faces many such clients at once, comes under much greater pressure. That pressure is not only memory: if the server’s network model dedicates one thread to each TCP connection, several thousand connections mean several thousand threads, and that many threads incur heavy context-switch overhead.

Of course, if the maximum number of connections in the connection pool is set too small, it is likely that the waiting time to obtain a connection will be too long, resulting in low throughput or even timeout when unable to obtain a connection.

Next, let’s simulate a situation where the increase in pressure causes the database connection pool to be full, in order to practice how to confirm the usage of the connection pool and optimize the parameters accordingly.

First, define a user registration method and open a transaction on it with the @Transactional annotation. The method sleeps for 500 milliseconds inside the transaction; since each database transaction holds a TCP connection for its duration, every call occupies a database connection for more than 500 milliseconds:


@Transactional
public User register() {
    User user = new User();
    user.setName("new-user-" + System.currentTimeMillis());
    userRepository.save(user);

    try {
        TimeUnit.MILLISECONDS.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return user;
}

Then, modify the configuration file to enable register-mbeans, so that the Hikari connection pool registers its statistics as a JMX MBean, making the pool easy to observe:


spring.datasource.hikari.register-mbeans=true

After starting the program and connecting to the process through JConsole, you can see that the maximum number of connections is 10 by default:

[Screenshot: JConsole showing a maximum pool size of 10]

Load-testing the application with wrk, you can see the connection count jump from 0 straight to 10, with 20 threads waiting to obtain connections:

[Screenshot: JConsole showing 10 active connections and 20 threads awaiting connections]

Soon, exceptions appear because database connections cannot be obtained, as shown below:

[15:37:56.156] [http-nio-45678-exec-15] [ERROR] [.a.c.c.C.[.[.[/].[dispatcherServlet]:175 ] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: unable to obtain isolated JDBC connection; nested exception is org.hibernate.exception.JDBCConnectionException: unable to obtain isolated JDBC connection] with root cause
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.

From the exception message, you can see that the database connection pool is HikariPool. The fix is simple: modify the configuration file and raise the pool’s maximum connections to 50.


spring.datasource.hikari.maximum-pool-size=50

Then, observe whether this setting suits the current load, that is, whether it meets demand without holding too many resources. From the monitoring, the adjustment looks reasonable: roughly half the capacity is held in reserve, and no threads are left waiting for connections:

[Screenshot: JConsole showing about 25 of 50 connections in use and no waiting threads]

In this demo, I knew the load test would use about 25 concurrent connections, so I set the pool maximum directly to 50. In a real situation, as long as the database can withstand it, you can first set a generously large maximum when you hit connection limits, observe the actual concurrency the application reaches, and then settle on the final maximum based on that.

In truth, waiting for error logs before adjusting is already too late. The more appropriate practice is to continuously monitor important resources like the database connection pool, alert once usage crosses a threshold (for example, 50% of the maximum), and expand capacity promptly when the alarm fires.

In this case, I used JConsole to check the effect of the parameter change. In production, the relevant data should feed into your metrics system for continuous monitoring.
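
For example, beyond eyeballing JConsole, Hikari’s statistics can be pulled programmatically through its HikariPoolMXBean and forwarded to a metrics system. Below is a minimal sketch, assuming register-mbeans is enabled and the default pool name HikariPool-1:

import java.lang.management.ManagementFactory;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import com.zaxxer.hikari.HikariPoolMXBean;

public void logPoolStats() throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // Hikari registers the pool MBean under "type=Pool (<poolName>)"
    ObjectName name = new ObjectName("com.zaxxer.hikari:type=Pool (HikariPool-1)");
    HikariPoolMXBean pool = JMX.newMXBeanProxy(server, name, HikariPoolMXBean.class);
    log.info("active={} idle={} total={} waiting={}",
            pool.getActiveConnections(), pool.getIdleConnections(),
            pool.getTotalConnections(), pool.getThreadsAwaitingConnection());
}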

Here, let me emphasize: whenever you modify configuration parameters, you must verify that they take effect, and confirm in your monitoring that they are both effective and reasonable. I stress this because there are real pitfalls here.

I once ran into exactly such an incident. Ahead of a big promotion, the application was being scaled up, and the maximum number of connections, maxActive, in the database configuration file was raised from 50 to 100. The change was never verified through monitoring, and on the day of the promotion the application exhausted its connection pool because the connection count had not actually increased.

The investigation showed why the new value never took effect: the application had originally used the Druid connection pool, but a later framework upgrade swapped the pool for the Hikari implementation. The old configuration had long been ignored, so modifying its parameters changed nothing.

Therefore, when tuning the connection pool, seeing is believing.

Key Review #

Today, I discussed three major aspects of connection pools: how they are implemented, how to use them, and how to configure their parameters, using the three pools most commonly seen in business code as examples: the Redis connection pool, the HTTP connection pool, and the database connection pool.

Client SDKs implement connection handling in three ways: pool and connection separation, an internal connection pool, and no pooling at all. To use a connection pool correctly, first identify which way the SDK takes. For example, the Jedis API follows the pool and connection separation approach, while Apache HttpClient is an API with a built-in connection pool.

As for the usage, there are two main points: firstly, ensuring that the connection pool is reusable, and secondly, explicitly closing the connection pool and releasing resources as much as possible before the program exits. The original intention of designing connection pools is to maintain a certain number of connections so that they can be used as needed. Although retrieving connections from the connection pool is fast, the initialization of the connection pool is relatively slow, requiring the initialization of some management modules and the initial minimum number of idle connections. If the connection pool is not reusable, its performance will be worse than creating a single connection on demand.

Finally, the most important parameter in connection pool configuration is the maximum number of connections. Many high-concurrency applications often have performance issues due to insufficient maximum connections. However, the maximum number of connections should not be set too large, but just enough. It is important to note that for important connection pools such as database connection pools, HTTP connection pools, and Redis connection pools, a comprehensive monitoring and alarm mechanism must be established, and parameter configuration should be adjusted in a timely manner based on capacity planning.

Today, I have put all the code I used on GitHub, and you can click on this link to view it.

Reflection and Discussion #

With the connection pool, the process of acquiring a connection involves getting it from the pool, and if there are not enough connections, the pool will create new connections. At this point, there are usually two timeout settings for acquiring a connection: one is the maximum waiting time for acquiring a connection from the pool, often referred to as connectRequestTimeout or connectWaitTimeout; the other is the timeout for the three-way handshake when establishing a new TCP connection in the connection pool, often referred to as connectTimeout. For JedisPool, Apache HttpClient, and Hikari database connection pool, do you know how to set these two parameters?

When using an SDK with a connection pool, the key is to identify whether it implements an internal connection pool and try to reuse the client as much as possible. For MongoDB in NoSQL, when using the MongoDB Java driver, should the MongoClient class be created each time or reused? Can you find the answer in the official documentation?

Regarding connection pools, have you encountered any pitfalls? I am Zhu Ye. Feel free to leave a comment and share your experiences in the comment section. Also, feel free to share this article with your friends or colleagues for further discussion.