
33 Practical Performance Testing for Redis #

Why do we need performance testing? #

There are many scenarios where performance testing is needed, including the following:

  1. Technology selection, such as choosing between Memcached and Redis;
  2. Comparing the throughput of single-node Redis and Redis clusters;
  3. Evaluating the performance of different data types, such as sets and sorted sets;
  4. Comparing the throughput of enabling persistence and disabling persistence;
  5. Comparing the throughput of optimized and non-optimized configurations;
  6. Comparing the throughput of different Redis versions as a reference for whether to upgrade.

In cases like these, and many others, we need to perform performance testing.

Several methods for performance testing #

Given that performance testing comes up in so many scenarios, how do we actually carry it out?

Currently, the mainstream approaches fall into two categories:

  1. Writing your own code to simulate concurrent requests;
  2. Using redis-benchmark for testing.

Writing your own code for performance testing is not very flexible, and it is hard to simulate a large number of concurrent connections in a short period of time, so the author does not recommend this approach. Fortunately, Redis ships with its own performance testing tool, redis-benchmark, so this article focuses on how to use it.

Practical exercise on benchmark testing #

redis-benchmark is located in the src directory of Redis. We can run ./redis-benchmark -h to view its usage. The output is as follows:

Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

 -h <hostname>      Server hostname (default 127.0.0.1)
 -p <port>          Server port (default 6379)
 -s <socket>        Server socket (overrides host and port)
 -a <password>      Password for Redis Auth
 -c <clients>       Number of parallel connections (default 50)
 -n <requests>      Total number of requests (default 100000)
 -d <size>          Data size of SET/GET value in bytes (default 3)
 --dbnum <db>       SELECT the specified db number (default 0)
 -k <boolean>       1=keep alive 0=reconnect (default 1)
 -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
  Using this option the benchmark will expand the string __rand_int__
  inside an argument with a 12 digits number in the specified range
  from 0 to keyspacelen-1. The substitution changes every time a command
  is executed. Default tests use this to hit random keys in the
  specified range.
 -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
 -e                 If server replies with errors, show them on stdout.
                    (no more than 1 error per second is displayed)
 -q                 Quiet. Just show query/sec values
 --csv              Output in CSV format
 -l                 Loop. Run the tests forever
 -t <tests>         Only run the comma separated list of tests. The test
                    names are the same as the ones produced as output.
 -I                 Idle mode. Just open N idle connections and wait.

It can be seen that redis-benchmark supports the following options:

  • -h <hostname>: Server hostname (default 127.0.0.1).
  • -p <port>: Server port (default 6379).
  • -s <socket>: Server socket (overrides host and port).
  • -a <password>: Password for Redis Auth.
  • -c <clients>: Number of parallel connections (default 50).
  • -n <requests>: Total number of requests (default 100000).
  • -d <size>: Data size of SET/GET value in bytes (default 3).
  • --dbnum <db>: SELECT the specified db number (default 0).
  • -k <boolean>: 1=keep alive, 0=reconnect (default 1).
  • -r <keyspacelen>: Use random keys for SET/GET/INCR, random values for SADD. By using this option, the benchmark will replace the string __rand_int__ inside an argument with a 12-digit number in the specified range from 0 to keyspacelen-1. The substitution changes every time a command is executed. By default, the test scenarios use this to hit random keys within the specified range.
  • -P <numreq>: Pipeline <numreq> requests. Default is 1 (no pipeline).
  • -e: If the server replies with errors, show them on stdout (no more than one error per second is displayed).
  • -q: Quiet mode. Only show query/sec values.
  • --csv: Output the test results in CSV format.
  • -l: Loop mode. The benchmark test will run indefinitely.
  • -t <tests>: Only run the comma-separated list of tests. The test names are the same as the ones generated in the output.
  • -I: Idle mode. Just open N idle connections and wait.

It can be seen that redis-benchmark has quite comprehensive functionality.
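To make these options concrete, here are a few example invocations. They are only illustrative sketches; the host, port, connection count, request count, and payload size are placeholder values that you would adjust for your own environment:

./redis-benchmark -h 127.0.0.1 -p 6379 -c 100 -n 200000 -d 256 -t set,get -q
./redis-benchmark -n 200000 -r 100000 -t set,get,incr -q
./redis-benchmark -P 16 -t set,get -q

The first command runs only the SET and GET tests with 100 parallel connections, 200,000 total requests, and 256-byte values, printing just the requests-per-second figures. The second spreads the requests over random keys in the range 0 to 99999 instead of hitting a single key. The third pipelines 16 requests per round trip, which usually raises the measured throughput considerably.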

Basic Usage #

On the machine where the Redis server is installed, we can execute ./redis-benchmark without any parameters. The execution result is as follows:

[@iZ2ze0nc5n41zomzyqtksmZ:src]$ ./redis-benchmark
====== PING_INLINE ======
  100000 requests completed in 1.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.81% <= 1 milliseconds
100.00% <= 2 milliseconds
79302.14 requests per second

====== PING_BULK ======
  100000 requests completed in 1.29 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.83% <= 1 milliseconds
100.00% <= 1 milliseconds
77459.34 requests per second

====== SET ======
  100000 requests completed in 1.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.80% <= 1 milliseconds
99.99% <= 2 milliseconds
100.00% <= 2 milliseconds
79239.30 requests per second

====== GET ======
  100000 requests completed in 1.19 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.72% <= 1 milliseconds
99.95% <= 15 milliseconds
100.00% <= 16 milliseconds
84104.29 requests per second

====== INCR ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.86% <= 1 milliseconds
100.00% <= 1 milliseconds
85397.09 requests per second

====== LPUSH ======
  100000 requests completed in 1.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.79% <= 1 milliseconds
100.00% <= 1 milliseconds
82169.27 requests per second

====== RPUSH ======
  100000 requests completed in 1.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.71% <= 1 milliseconds
100.00% <= 1 milliseconds
81900.09 requests per second

====== LPOP ======
  100000 requests completed in 1.29 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.78% <= 1 milliseconds
99.95% <= 13 milliseconds
99.97% <= 14 milliseconds
100.00% <= 14 milliseconds
77399.38 requests per second

====== RPOP ======
  100000 requests completed in 1.25 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.82% <= 1 milliseconds
100.00% <= 1 milliseconds
80192.46 requests per second

====== SADD ======
  100000 requests completed in 1.25 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.74% <= 1 milliseconds
100.00% <= 1 milliseconds
80192.46 requests per second

====== HSET ======
  100000 requests completed in 1.21 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.86% <= 1 milliseconds
100.00% <= 1 milliseconds
82440.23 requests per second

====== SPOP ======
  100000 requests completed in 1.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.92% <= 1 milliseconds
100.00% <= 1 milliseconds
81699.35 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 1.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.69% <= 1 milliseconds
99.95% <= 13 milliseconds
99.99% <= 14 milliseconds
100.00% <= 14 milliseconds
79176.56 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 1.25 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.57% <= 1 milliseconds
99.98% <= 2 milliseconds
100.00% <= 2 milliseconds
80128.20 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 1.25 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.91% <= 1 milliseconds
100.00% <= 1 milliseconds
80064.05 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.78% <= 1 milliseconds
100.00% <= 1 milliseconds
76863.95 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 1.20 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.85% <= 1 milliseconds
100.00% <= 1 milliseconds
83263.95 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 1.27 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.65% <= 1 milliseconds
100.00% <= 1 milliseconds
78740.16 requests per second

Based on these results, the tested commands (SET, GET, INCR, and so on) can each handle roughly 80,000 requests per second on this machine.
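If the goal is to compare two configurations, such as persistence enabled versus disabled from scenario 4 at the beginning of this article, one simple approach is to run the same benchmark against each instance and save the CSV output for comparison. The ports and file names below are placeholders for two hypothetical instances:

./redis-benchmark -p 6379 -t set,get -n 100000 --csv > no-persistence.csv
./redis-benchmark -p 6380 -t set,get -n 100000 --csv > with-aof.csv

Because the absolute numbers depend heavily on hardware, network, and payload size, the relative difference between the two result files is more meaningful than either figure on its own.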