I'm using zaphoyd's WebSocket++ to create a WebSocket server that needs to accept a very high number of concurrent connections (at least 1M, i.e. C1M) on CentOS. But the server process always gets killed by the kernel when the number of connections reaches about 63k. I see this message in dmesg:
Out of memory: Kill process 5420 (echo_server) score 382 or sacrifice child
Killed process 5420, UID 10545, (echo_server) total-vm:1488192kB, anon-rss:1467524kB, file-rss:32kB
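For context, the server is essentially the stock WebSocket++ echo_server example, stripped down to the following pattern (port 9002 and the single-threaded Asio config are the example's defaults; treat this as a sketch rather than my exact code):

#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

// Echo every message back to its sender.
void on_message(server* s, websocketpp::connection_hdl hdl, server::message_ptr msg) {
    s->send(hdl, msg->get_payload(), msg->get_opcode());
}

int main() {
    server srv;
    srv.init_asio();                      // single-threaded Asio transport
    srv.set_message_handler(websocketpp::lib::bind(
        &on_message, &srv,
        websocketpp::lib::placeholders::_1,
        websocketpp::lib::placeholders::_2));
    srv.listen(9002);                     // port from the stock example
    srv.start_accept();
    srv.run();                            // blocks, servicing all connections
}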
I don't think the kernel should kill a process that consumes only about 1.5 GB. To test that, I created a simple program that allocates memory and does some read/write operations on it. That program was not killed by the kernel; it just got a bad_alloc error when its memory usage reached about 3.2 GB.
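The test program looked roughly like this (the 64 MB chunk size is arbitrary, and it touches every page so the memory is actually resident, not just reserved):

#include <cstring>
#include <iostream>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    const std::size_t chunk = 64UL * 1024 * 1024;  // 64 MB per step (arbitrary)
    try {
        for (;;) {
            char* p = new char[chunk];
            std::memset(p, 0xA5, chunk);           // write: force pages resident
            volatile char c = p[chunk / 2];        // read some of it back
            (void)c;
            blocks.push_back(p);
            std::cout << (blocks.size() * 64) << " MB allocated\n";
        }
    } catch (const std::bad_alloc&) {
        std::cerr << "bad_alloc at ~" << (blocks.size() * 64) << " MB\n";
        return 1;
    }
}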
I also checked the relevant system limits but found nothing suspicious:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29712
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 29712
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
$ cat /proc/sys/fs/nr_open
10485760
$ cat /proc/sys/fs/file-max
1280000
$ cat /proc/sys/fs/file-nr
1536 0 1280000
Can anyone help with this?