
I am running into what appears to be a connection limit for the YEDIS interface to YugaByte (or maybe an internal RPC connection limit).

This limit is around 800 simultaneous connections. The following throws an error after a while:

java -jar ./yb-sample-apps.jar \
 --workload RedisKeyValue \
 --nodes 127.0.0.1:6379  \
 --nouuid \
 --value_size 256 \
 --num_threads_read 0 \
 --num_threads_write  800  \
 --num_unique_keys 1000000000

The error looks like this:

tablet: f9b5581437774f97979c868e283c628d, num_ops: 1, num_attempts: 5, txn: 00000000-0000-0000-0000-000000000000) passed its deadline 57037.830s (passed: 3.851s

But this seems to run fine indefinitely:

java -jar ./yb-sample-apps.jar \
 --workload RedisKeyValue \
 --nodes 127.0.0.1:6379  \
 --nouuid \
 --value_size 256 \
 --num_threads_read 0 \
 --num_threads_write  500  \
 --num_unique_keys 1000000000

How can I raise the connection limit? Or is this a bug? 800 connections is nowhere near enough; my application maxes out at more like 8,000 simultaneous connections.

As far as I can tell, my ulimit settings are fine:

[root@72c14ca48af1 yugabyte]# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 29892
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

1 Answer


Thanks for reporting this issue, and for the additional input on the YugaByte Slack channel that helped isolate it.

It turned out there were two issues at play here:

a) When a yb-tserver is launched on its own, it assumes it can use 85% of the system RAM (this is configurable), but launching a test cluster via yb-ctl only gives the yb-tserver process 1GB of RAM by default.

b) For Redis connections, the fixed per-connection overhead was 1MB, so at about 8,000 connections this overhead alone would require about 8GB of memory. This is controlled by the redis_rpc_block_size yb-tserver gflag, which defaults to 1MB (see the rough math below).
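
As rough back-of-the-envelope math (these are just the numbers above plugged into shell arithmetic, not output from any YugaByte tool), you can see why ~800 connections already strain the 1GB default and why 8,000 would not fit:

 # ~800 connections x 1MB of fixed RPC buffer overhead already approaches the 1GB default:
 echo $(( 800 * 1024 * 1024 ))    # ~800MB
 # 8000 connections at the default 1MB block size would need ~8GB just for buffers:
 echo $(( 8000 * 1024 * 1024 ))   # ~8GB
 # Dropping redis_rpc_block_size to 128KB (131072 bytes) brings 8000 connections down to ~1GB:
 echo $(( 8000 * 131072 ))        # ~1GB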

Due to these two factors, writes to the system were being rejected with the following error:

I0624 21:32:28.317205  6772 maintenance_manager.cc:341] we have exceeded our soft memory limit (current capacity is 136.82%).  However, there are no ops currently runnable which would free memory.

The following overrides should unblock your workload:

 ./yb-ctl destroy
 ./yb-ctl start --disable_ysql --tserver_flags="redis_rpc_block_size=131072,memory_limit_hard_bytes=6000000000"
 ./yb-ctl setup_redis

The above memory_limit_hard_bytes value of ~6GB assumes that you have an 8GB machine. Note that the yb-master's memory requirements aren't too high.
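
With those overrides in place, one way to confirm the limit has actually moved is simply to re-run the workload from the question at the concurrency you are targeting (this is just your original command with --num_threads_write raised to 8000; adjust to taste):

 java -jar ./yb-sample-apps.jar \
  --workload RedisKeyValue \
  --nodes 127.0.0.1:6379 \
  --nouuid \
  --value_size 256 \
  --num_threads_read 0 \
  --num_threads_write 8000 \
  --num_unique_keys 1000000000

If writes are still rejected with the soft memory limit message, raise memory_limit_hard_bytes (or lower redis_rpc_block_size further), keeping the per-connection math above in mind.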