
When I try to perform a load test using httperf with a high request rate, I get the following error:

$ httperf --client=0/1 --server=staging.truecar.com --port=80 --uri=/ --rate=30 --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
**Segmentation fault: 11**

The error occurs whenever --rate is greater than 15.

Versions:

httperf 0.9.0

OS X 10.7.1

Mike Hemelberg
  • I see the same on OSX 10.6.8, with httperf 0.8.1 and 0.9.0 – jches Oct 06 '11 at 07:04
  • I see this, even with the rate set > 1. It seems to run a little longer before segfaulting at 2, but 3 segfaults wicked fast. – Jesse Oct 21 '11 at 19:17
  • Check if you don't run out of memory. – qwertzguy Nov 17 '11 at 13:48
  • Your system is running out of file descriptors. IIRC, this happens with RPM packages built with a far too small `__FD_SETSIZE` (like 1024). AFAIK you'll need to recompile the limiting RPM packages (e.g. glibc, Apache, PHP, etc.) to increase `__FD_SETSIZE`, so I'd suggest migrating the question to [sf]. – Jürgen Thelen Dec 02 '11 at 17:33
  • I get this same issue on CentOS 6 x64 running Apache 2.2.15, but not Debian 6 x64 running Nginx 1.2.3, using httperf-0.9.0 on both. Open files limits are the same (1024) on both. – tacotuesday Aug 16 '12 at 22:33

3 Answers


As the warning states, the number of connections to the HTTP server is exceeding the maximum number of allowed open file descriptors. It's likely that even though httperf caps itself at FD_SETSIZE, you're still going beyond that limit.
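
As a rough check of the FD_SETSIZE your toolchain compiles in (commonly 1024 on both OS X and glibc; note that the value the httperf binary was actually built against is what matters, and a prebuilt package may differ), you can ask the C preprocessor, assuming a compiler is installed:

$ printf '#include <sys/select.h>\nFD_SETSIZE\n' | cc -E -x c - | tail -n 1
1024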

You can check the limit value with ulimit -a

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

Try increasing the limit with ulimit -n <n>

$ ulimit -n 2048
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2048
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

This is common practice on large web servers and the like, as a socket is essentially just an open file-descriptor.
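
Note that ulimit only affects the current shell and its children, so raise the limit and re-run httperf from the same session, e.g. (command taken from the question):

$ ulimit -n 2048
$ httperf --client=0/1 --server=staging.truecar.com --port=80 --uri=/ --rate=30 \
  --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --num-calls=1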

ben lemasurier
  • As @Tom van der Woerdt/@m_pahlevanzadeh pointed out, replace `ulimit` with `limit` if you're using csh rather than bash/ksh – ben lemasurier Jan 24 '12 at 23:20
  • Thank you for the tips. But this doesn't solve the segmentation fault, and it is most probably not the root cause of the problem. Reading the httperf documentation, it is actually aware of available file descriptors: it logs unavailable file descriptors and reports them after a run. The program is not meant to crash if you run out of file descriptors. – Overbryd Jun 25 '12 at 19:12
  • Basho has a [convenient guide](http://wiki.basho.com/Open-Files-Limit.html) for raising the open file limit, with steps for Lion. Basically, add `limit maxfiles 16384 32768` to a file called `/etc/launchd.conf` (create it if missing), then reboot. Check the new value with `ulimit -a` or `launchctl limit`. I still get a segfault, though. – rud Sep 19 '12 at 08:15
  • Doesn't work. I already have the following set: `open files (-n) 640000` – emirhosseini Sep 20 '20 at 16:53

Try running it under gdb, with something like:

$ gdb --args httperf --client=0/1 --server=staging.truecar.com \
--port=80 --uri=/ --rate=30 --send-buffer=4096 \
--recv-buffer=16384 --num-conns=200 --num-calls=1

This will invoke gdb and you should see a (gdb) prompt.

Then type run and press Enter.

When it crashes, type bt (backtrace). Investigate, and/or share the output here.
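
With a reasonably recent GNU gdb you can also capture the backtrace non-interactively, which makes it easier to share (the -ex and --args options are standard GNU gdb; very old Apple builds may lack -ex):

$ gdb -batch -ex run -ex bt --args httperf --client=0/1 --server=staging.truecar.com \
  --port=80 --uri=/ --rate=30 --send-buffer=4096 --recv-buffer=16384 \
  --num-conns=200 --num-calls=1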

Till
  • I have the same problem as the original question. Here is the output of your suggested gdb run: https://gist.github.com/2990517 – Overbryd Jun 25 '12 at 18:57
  • IMHO, this could be another case where your system is running out of file descriptors. The other thing could be bad memory management in httperf. You could try to use [sysbench](http://sysbench.sourceforge.net/) instead. – Till Jun 26 '12 at 12:51
  • Probably this is a problem inside httperf. Sysbench is no use for me, since I want to test a webserver. – Overbryd Jun 27 '12 at 18:53
  • Eventually I ended up using partly siege and mostly tsung. – Overbryd Sep 10 '12 at 16:12

ksh and bash use ulimit; csh uses the limit command.
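
For example (in tcsh the resource name for open files is descriptors):

$ ulimit -n 2048           # bash / ksh
% limit descriptors 2048   # csh / tcsh equivalent
% limit descriptors        # show the current value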

Tom van der Woerdt
PersianGulf
  • Also, you can use the lsof command to see open files (example below); it works on the following systems: AIX 5.3, FreeBSD 4.9 for x86-based systems, FreeBSD 7.0 and 8.0 for AMD64-based systems, Linux 2.1.72 and above for x86-based systems, and Solaris 9 and 10. @yesterday – PersianGulf Dec 27 '11 at 17:20
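
For instance, to inspect the descriptors held by a running httperf process (the PID below is a placeholder; assumes lsof is installed):

$ lsof -n -p <httperf-pid> | wc -l     # count all open files for that process
$ lsof -n -i TCP -a -p <httperf-pid>   # list only its TCP sockets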