
I started writing some basic tests in JMeter and was surprised that the measurements are so different from those from Apache ab.

I have a gigabit LAN connecting an Intel i7 server running Nginx and an i5 test machine running JMeter or ab. Initially, I am simply testing the out-of-the-box Nginx home page response rate.

ab -c 1 -n 100 http://testserver.local/

gives

Document Path:          /
Document Length:        151 bytes

Concurrency Level:      1
Time taken for tests:   0.078 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      38400 bytes
HTML transferred:       15100 bytes
Requests per second:    1280.77 [#/sec] (mean)
Time per request:       0.781 [ms] (mean)
Time per request:       0.781 [ms] (mean, across all concurrent requests)
Transfer rate:          480.29 [Kbytes/sec] received

This result is consistently reproducible, +/- a few percent.


In JMeter, I have a 1-user 100-loop thread group containing:

  • an HTTP header manager setting Accept-Encoding: gzip
  • an HTTP Get / sampler
  • a summary report listener

With only 100 samples, this gives wildly inconsistent results each time I run it. But the most startling fact is that the throughput is reported as low as 40 requests per second (not 1280). The highest recorded rate was 1030, and this was achieved only when I increased to 10,000 samples.

Am I right in thinking that JMeter is the wrong tool for simple load tests because its overheads are too high to allow accurate measurements?

Greg Dubicki
Rick-777

3 Answers


JMeter tells you how long each request actually took; ab just does some very basic math to get the overall average. So the direct answer to your question is that JMeter gets it right, while ab only makes a rough guess by giving you the mean across everything.

But, sure, if you put the two tools side by side and rate them for raw speed, then ab is clearly going to outperform JMeter. JMeter simply does more: it records more data and processes more logic, so it takes longer to turn around a single request. The simple fact is that JMeter is a fully featured load-testing tool; ab is, well, not.

The thing is, the aim of a load-testing tool is not to be the fastest kid on the block; instead it is about being able to build a realistic representation of the sort of load your app might be hit with when it goes live. In this respect JMeter wins hands down, so it really depends on what your requirements are. If you just want to generate as many requests as possible using the least amount of hardware, then ab is a nice choice, but if you want to build a representative test, with transactional journeys, conditional logic and all sorts of other useful stuff, then JMeter is the way to go. Think of it like this: they are both Apache projects, but ab was, I think, designed to test the Apache web server, whereas JMeter was designed to test Tomcat.

Now, I'm guessing that JMeter was producing inconsistent results because it was hitting a limit on the machine it was running on. I'm betting you were running in GUI mode with at least one listener active; that asks the tool to do a lot. If you need a high rate of requests, then JMeter has a lean and mean mode: for large volumes, the best practice is to execute tests at the command line with very few listeners. There's lots of information about this subject on the Apache JMeter site.
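For example, a minimal non-GUI run looks something like this (the plan and file names here are placeholders, and the report generation flags require JMeter 3.0 or later):

```shell
# Run a test plan without the GUI: -n = non-GUI mode, -t = test plan,
# -l = results log file. "homepage-test.jmx" and "results.jtl" are
# placeholder names for this sketch.
jmeter -n -t homepage-test.jmx -l results.jtl

# Afterwards, generate the HTML summary dashboard offline from the log,
# instead of paying for a live listener during the run.
jmeter -g results.jtl -o report/
```
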

Another point you should consider, if you're really getting into load testing, is that to really get benefit from this sort of thing you need to first decide what sort of load your site needs to support, and only then design a test that represents it. This is achieved using pacing and simulated wait times. The problem with telling a thread that it should just go away and run as fast as it possibly can is that it will iterate as fast as its local conditions allow, but there will always be something that puts the brakes on; even ab is limited, because no matter how lightweight a tool is, it still does something. If you pace your requests instead, you remove this problem, and as a rather useful added bonus you end up with consistency between runs and between builds of the code: even if your server speeds up or slows down (with changes to the code base), your test will still make the same rate of requests, which is pretty useful for benchmarking.
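To make the pacing idea concrete, here is a back-of-the-envelope sketch (the numbers are made up for illustration): if you want a fixed total request rate from a fixed thread count, each thread must start a new iteration at a fixed interval, regardless of how quickly the server answers.

```shell
# Sketch with assumed numbers: hold 50 requests/second using 10 threads.
target_rps=50   # desired total requests per second across all threads
threads=10      # number of load-generating threads
# Each thread must begin an iteration every threads/target_rps seconds;
# the pacing delay absorbs any speed-up or slow-down in the server.
pacing=$(awk -v t="$threads" -v r="$target_rps" 'BEGIN { printf "%.2f", t / r }')
echo "each thread iterates every ${pacing}s"
```

With these numbers, each thread starts an iteration every 0.20 seconds, so the total rate stays at 50 requests per second even if individual responses come back faster.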

If you want to take JMeter further, then have a look at the Constant Throughput Timer, and then use multiple threads to build the level of traffic you need to represent.
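One convenient pattern (a sketch; the plan file and property name here are arbitrary choices, not anything from the question) is to drive the timer's target value from a command-line property via JMeter's `__P` function, so the same plan can be run at different rates. Note that the Constant Throughput Timer's value is expressed in samples per minute, not per second.

```shell
# In the test plan, set the Constant Throughput Timer's target to
#   ${__P(throughput,1200)}
# i.e. samples per MINUTE, defaulting to 1200 (20/s) if no property is given.
# Then override it per run; "plan.jmx" and the property name "throughput"
# are placeholders for this sketch.
jmeter -n -t plan.jmx -Jthroughput=6000 -l results.jtl   # ~100 requests/second
```
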

Oliver Lloyd
    ApacheBench (which I agree just does "basic math") may produce less data, but that does not mean it is incapable of computing a mean correctly. The OP reports an average of around 1280 requests per second with `ab`. The highest he reports from JMeter is 1030. While I agree with the various benefits you list for JMeter, I cannot ignore the obvious fact that JMeter is not saturating the web server as much as ApacheBench. For this reason, I find your assertion that "jmeter gets it right" suspicious. If JMeter got it right, it should be producing at least a comparable mean. –  Jan 10 '16 at 06:07
    Hey Tom, actually I never said the mean from ab was not correct, only that it is just that, the mathematical mean, and not a more detailed breakdown of the results such as JMeter gives. Re. your other points about who can produce the most requests per second, can I refer you to my 3rd para about the fastest kid on the block, but basically the OP is confused because he hasn't factored in the limits of his test hardware. Thanks. Oliver – Oliver Lloyd Jan 11 '16 at 10:22

In your setup, JMeter is saturating itself faster than it can saturate your web server.

You are running a very optimized C web server on superior hardware and benchmarking it with a relatively heavy Java application on lesser hardware. Optimized C machine code will (probably) always be faster than Java bytecode, so JMeter is not able to keep up with Nginx and gives you strange results as it hits its own hardware limitations. Java does lots of nice things in the background to manage hardware resources, but those also create unpredictable behavior at extreme resource usage. ApacheBench, on the other hand, is a C program light enough to saturate your web server with capacity to spare, which is why it produces consistent results.

JMeter is great for benchmarking heavier dynamic applications that need some time to process requests; all the extra data it provides helps with web apps like that. When you're dealing with static file serving (just about the fastest thing a web server can do) on a highly optimized web server, you need a tool fast enough to keep up.


As already stated in the first answer, the keyword is "requirements". JMeter is the better choice for testing a server that serves web pages: for example, it can send a sequence of requests, generate a different request for each iteration, parse HTML responses, and load the images and scripts referenced in the HTML. ab is the better choice for REST API testing, where you need the server to respond as fast as possible and serve as many requests as possible, and where there is no connection between two subsequent requests. So yes, ab is indeed able to generate more requests than JMeter against the same server from the same client machine.

Bernard Vander Beken
Igor Grinfeld