20

We are running a web application and switched from memcached to Redis (2.4) for caching. Now we are somewhat disappointed with Redis performance. Redis is running on the same server, and we use only very simple GET and SET operations. On some requests which make heavy use of cached values, we issue up to 300 GET requests to Redis, and those requests take up to 150 ms in total. We have about 200,000 active keys and about 1,000 Redis requests per second. There is no problem with disk IO, RAM, or CPU. Because of our existing code we can't simply group Redis requests together. Memcached was about 4 times faster. What we like about Redis is that we don't need any cache warming, and we could use more advanced datastore features in the future. We expected Redis to perform similarly to memcached, so perhaps we missed something in our configuration, which is basically the default configuration.

Do you know of any best practice for redis performance tuning?

ak2
  • Redis is running on the same server as what? The client, or the one that was running memcached? 150ms for a Redis request sounds like you're hitting swap/disk, not memory, or do you mean 150ms for all 300 requests? – Joachim Isaksson May 30 '13 at 16:37
  • It's a web application with about 100 Apache requests per second. Redis is running on the same host as the web application itself and the MySQL database server, but we plan to move Apache to 3 load-balanced servers soon. The current server has about 64 GB of RAM; Redis takes about 100 MB. There is enough free RAM and no issue with CPU or IO. The server doesn't swap to disk. And yes, I mean 150 ms for 300 requests, but memcached took only 40 ms under the same conditions. – ak2 May 30 '13 at 16:44
    Only thing I can think of is if you're using a shared connection to Redis for all requests instead of one per web request, you may have latency problems with Redis, but at 1000 requests per second you should not see that bad latency. Sorry, nothing really helpful from my direction today :) – Joachim Isaksson May 30 '13 at 17:00

3 Answers

30

First, you may want to read the Redis benchmark page. It provides a good summary of the main points to check to tune Redis.

Even supposing you do not use pipelining, 300 GETs in 150 ms is not that efficient: it means the average latency is 500 µs. However, it actually depends on the size of your objects: the larger the objects, the higher the latency. On my very old 2 GHz AMD box, I can measure 150 µs latencies for small objects (a few bytes).

To quickly check the average latency of the Redis instance, you can use:

$ redis-cli --latency

Be sure to use a recent Redis version (not 2.4) to get this option. Note: 2.4 is quite old now, use Redis 2.6 - compile your own Redis version if needed, it is really straightforward.

To quickly run a benchmark to study latency, you can launch:

$ redis-benchmark -q -n 10000 -c 1 -d average_size_of_your_objects_in_bytes

It runs with a unique connection and no pipelining, so the latency can be deduced from throughput. Try to compare the result of these benchmarks to the figures measured with your application.
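On a single connection with no pipelining, throughput and average latency are simply reciprocals of each other. As a quick sanity check of the figures above (pure arithmetic, no Redis needed; the helper name is illustrative):

```python
# Average per-request latency implied by single-connection throughput.
def latency_us(requests_per_second):
    return 1_000_000 / requests_per_second

# 300 GETs in 150 ms corresponds to 2,000 requests/s on one connection,
# i.e. 500 us per request -- the figure computed from the question.
```
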

There are a number of points you may want to check:

  • Which Redis client library do you use? With which development language? For some scripting languages, you need to install the hiredis module to get an efficient client.
  • Is your machine a VM? On which OS?
  • Are the connections to Redis persistent? (i.e. you are not supposed to connect/disconnect at each HTTP request of your app server).
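A minimal sketch of the last point, assuming the redis-py client (the module-level cache is illustrative; most web frameworks have their own place for per-process singletons):

```python
# Create the client once per process and reuse it across HTTP requests.
# redis-py maintains an internal connection pool, so the TCP handshake
# happens once rather than on every request.
_client = None

def get_client():
    global _client
    if _client is None:
        import redis  # lazy import; assumes the redis-py package is installed
        _client = redis.Redis(host="localhost", port=6379)
    return _client
```
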

Why is it better with memcached? Well, a single memcached instance is certainly more scalable, and may be more responsive than a single Redis instance, since it can run on multiple threads. Redis is fast, but single-threaded: the execution of all commands is serialized. So while a command is ongoing on one connection, all the other clients have to wait, and bad latency on a given command also impacts all the pending commands. Generally, at low throughput, performance is comparable though.

At 1,000 q/s (a low throughput by Redis or memcached standards), I would say your problem is more likely on the client side (i.e. the choice of client library, connection/disconnection, etc.) than with the Redis server itself.

Finally, I should mention that if you generate a number of Redis queries per HTTP request, you should consider pipelining the commands you send to Redis as far as possible. It is really a key point for developing efficient Redis applications.
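As a sketch of what pipelining looks like on the client side (assuming a redis-py-style client; the `get_many` helper and key names are illustrative):

```python
# Batch many GETs into one network round trip using a pipeline.
def get_many(client, keys):
    """Queue all GETs client-side, then send them and read all replies at once."""
    pipe = client.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC
    for key in keys:
        pipe.get(key)
    # execute() performs a single round trip and returns the replies in
    # the same order the commands were queued.
    return pipe.execute()
```

With 300 GETs per page, this collapses 300 round trips into one; for plain string keys, the built-in `MGET` command achieves the same effect.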

If your application servers are on the same box as Redis, you can also use unix domain sockets instead of the TCP loopback to connect to Redis. It slightly improves performance (up to 50% more throughput when pipelining is NOT used).
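As a sketch, the server side of this is two redis.conf directives (the socket path and permissions are illustrative):

```
# redis.conf -- serve on a unix domain socket in addition to TCP
unixsocket /tmp/redis.sock
unixsocketperm 700
```

Clients then point at the socket path instead of host/port, e.g. `redis-cli -s /tmp/redis.sock`.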

Didier Spezia
  • "the execution of all the commands is serialized": would you give more details on this statement? to me it sounds like you are saying that while a command is running no other commands, even on other connections can run. is that how redis achieves command atomicity? – akonsu May 31 '13 at 03:43
  • Yes exactly. See http://stackoverflow.com/questions/10489298/redis-is-single-threaded-then-how-does-it-do-concurrent-i-o/10495458#10495458 – Didier Spezia May 31 '13 at 05:42
1

Check if Redis is using OS swap memory. If it is, that will add latency. To find out, search for "Latency induced by swapping" here: http://redis.io/topics/latency

If your server hardware is NUMA capable, it is better to start redis-server with numactl. Don't forget to turn off zone reclaim mode (vm.zone_reclaim_mode=0) in sysctl if you are starting redis-server under NUMA.

Sooraj
0

Try scripting those 300 GET requests inside a single Lua script. It should be faster because you save the round trips through the TCP/IP stack, even though your client code runs locally to the Redis server.
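A sketch of that idea, assuming the redis-py client (`get_many_lua` and the key names are illustrative): the Lua loop over KEYS executes inside the server, so all the GETs cost a single round trip.

```python
# Server-side batching with EVAL: the loop runs inside Redis, not in the client.
GET_MANY_LUA = """
local out = {}
for i, key in ipairs(KEYS) do
  out[i] = redis.call('GET', key)
end
return out
"""

def get_many_lua(client, keys):
    # One EVAL call replaces len(keys) individual GET round trips.
    return client.eval(GET_MANY_LUA, len(keys), *keys)
```

For plain string values, the built-in `MGET` command gives the same one-round-trip behaviour without any scripting.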

Dennis Anikin