
My problem is the same as this question and this question

I basically want to run httperf with 10000 connections in parallel, like this: [httperf --uri / --server 192.168.1.2 --port 8080 --num-conns=500000 --rate 10000]

I'm running it on Ubuntu 14.04.

First I raised the system file descriptor limit; this is what is configured on my OS now:

$ ulimit -a -S
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31348
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65530
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31348
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


$ ulimit -a -H
core file size          (blocks, -c) unlimited                                 
data seg size           (kbytes, -d) unlimited                                 
scheduling priority             (-e) 0                                         
file size               (blocks, -f) unlimited                                 
pending signals                 (-i) 31348                                     
max locked memory       (kbytes, -l) 64                                        
max memory size         (kbytes, -m) unlimited                                 
open files                      (-n) 65530                                     
pipe size            (512 bytes, -p) 8                                         
POSIX message queues     (bytes, -q) 819200                                    
real-time priority              (-r) 0                                         
stack size              (kbytes, -s) unlimited                                 
cpu time               (seconds, -t) unlimited                                 
max user processes              (-u) 31348                                     
virtual memory          (kbytes, -v) unlimited                                 
file locks                      (-x) unlimited        

I tried to compile the HEAD version from the GitHub repository, but it seems completely unstable.

I also tried the 0.9.0 version with a modified limit (I changed /usr/include/x86_64-linux-gnu/bits/typesizes.h to unlock the FD_SETSIZE limit of 1024), as answers to other questions suggest. After recompiling httperf, it keeps returning the same error:

*** buffer overflow detected ***: ./httperf terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x73f1f)[0x7fdca440ef1f]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7fdca44a682c]
/lib/x86_64-linux-gnu/libc.so.6(+0x10a6f0)[0x7fdca44a56f0]
/lib/x86_64-linux-gnu/libc.so.6(+0x10b777)[0x7fdca44a6777]
./httperf[0x403c69]
./httperf[0x4047e7]
./httperf[0x4088df]
./httperf[0x408d2e]
./httperf[0x4071df]
./httperf[0x40730b]
./httperf[0x406791]
./httperf[0x405e0e]
./httperf[0x409afd]
./httperf[0x406022]
./httperf[0x404c1f]
./httperf[0x4024ac]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fdca43bcec5]
./httperf[0x40358b]
======= Memory map: ========
00400000-00410000 r-xp 00000000 08:05 265276                             
0060f000-00610000 r--p 0000f000 08:05 265276                             
00610000-00611000 rw-p 00010000 08:05 265276                             
00611000-0068a000 rw-p 00000000 00:00 0 
019da000-01c8f000 rw-p 00000000 00:00 0                                  [heap]
7fdca4185000-7fdca419b000 r-xp 00000000 08:06 3277773                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fdca419b000-7fdca439a000 ---p 00016000 08:06 3277773                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fdca439a000-7fdca439b000 rw-p 00015000 08:06 3277773                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fdca439b000-7fdca4556000 r-xp 00000000 08:06 3279540                    /lib/x86_64-linux-gnu/libc-2.19.so
7fdca4556000-7fdca4756000 ---p 001bb000 08:06 3279540                    /lib/x86_64-linux-gnu/libc-2.19.so
7fdca4756000-7fdca475a000 r--p 001bb000 08:06 3279540                    /lib/x86_64-linux-gnu/libc-2.19.so
7fdca475a000-7fdca475c000 rw-p 001bf000 08:06 3279540                    /lib/x86_64-linux-gnu/libc-2.19.so
7fdca475c000-7fdca4761000 rw-p 00000000 00:00 0 
7fdca4761000-7fdca4866000 r-xp 00000000 08:06 3279556                    /lib/x86_64-linux-gnu/libm-2.19.so
7fdca4866000-7fdca4a65000 ---p 00105000 08:06 3279556                    /lib/x86_64-linux-gnu/libm-2.19.so
7fdca4a65000-7fdca4a66000 r--p 00104000 08:06 3279556                    /lib/x86_64-linux-gnu/libm-2.19.so
7fdca4a66000-7fdca4a67000 rw-p 00105000 08:06 3279556                    /lib/x86_64-linux-gnu/libm-2.19.so
7fdca4a67000-7fdca4a8a000 r-xp 00000000 08:06 3279536                    /lib/x86_64-linux-gnu/ld-2.19.so
7fdca4c63000-7fdca4c66000 rw-p 00000000 00:00 0 
7fdca4c85000-7fdca4c89000 rw-p 00000000 00:00 0 
7fdca4c89000-7fdca4c8a000 r--p 00022000 08:06 3279536                    /lib/x86_64-linux-gnu/ld-2.19.so
7fdca4c8a000-7fdca4c8b000 rw-p 00023000 08:06 3279536                    /lib/x86_64-linux-gnu/ld-2.19.so
7fdca4c8b000-7fdca4c8c000 rw-p 00000000 00:00 0 
7ffff050b000-7ffff052c000 rw-p 00000000 00:00 0                          [stack]
7ffff05fe000-7ffff0600000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]

I'm not that familiar with low-level syscalls such as select, but as far as I can tell httperf 0.9.0 uses select to handle socket events, and this syscall is limited by a hardcoded file descriptor limit of 1024. Do you guys have any idea what I'm doing wrong? How can I unlock the 1024 limit?

Thiago

1 Answer


You may not want to use 10K descriptors in a single process. If you decide to do it, you will probably want to split the handling up so that a single call to select() is not handling all 10K descriptors (or performance will, to use a descriptive technical term, suck). See Wikipedia on the C10K Problem or the SO tag — which this question is already tagged with, so you are at least aware of the classification.

You need to look at ulimit -a -H vs ulimit -a -S to see how much of various resources you have (or replace -a with -n to get 'open files' aka 'file descriptors'). If you have a hard limit less than 10K, you are into kernel recompilation, or at least finding the source of that upper limit in the configuration. If the hard limit is bigger, you can override the limit with ulimit at the command line, or with the POSIX getrlimit() and setrlimit() functions and RLIMIT_NOFILE.

Jonathan Leffler
  • Splitting isn't a bad idea; unfortunately httperf doesn't support multiprocessing, which would do the job. The httperf documentation is very explicit that you must not run two httperf processes at the same time. – Thiago Apr 09 '15 at 03:40
  • I was thinking of multiple threads, but there are probably constraints on that too. It takes time to scan over 10K file descriptors, both in the kernel and in the process. Be cautious. – Jonathan Leffler Apr 09 '15 at 03:43