
I ran a performance test for 500 VU using the "jp@gc - Stepping Thread Group". I noticed that, right from the 200 VU to 500 VU load, the hits/sec stayed consistently at 20-25 for 25 minutes until the end of the run, with a 0.04% error rate.

I know that I could control the hits/sec by using the Limit RPS feature or a Constant Throughput Timer, but I didn't apply or enable either.

My questions are:

1. Was the run good or bad?
2. What should the hits/sec be for a 500 VU load?
3. Is the hits/sec determined by the BlazeMeter engine based on its efficiency?

Das Prakash

2 Answers

  1. Make sure you have correctly configured the Stepping Thread Group.
  2. If you get the same throughput for 200 and 500 VU, that is not good: on an ideal system, the throughput at 500 VU should be 2.5 times higher than at 200 VU. If you are uncertain whether your application or the BlazeMeter engine(s) are to blame, you can check the health of BlazeMeter's instances during the test time frame on the Engine Health tab of your load test report.
  3. As far as I'm aware, BlazeMeter relies on JMeter's throughput-calculation algorithm. According to the JMeter glossary:

    Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server. The formula is: Throughput = (number of requests) / (total time).
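To make the glossary formula concrete, here is a minimal sketch (not BlazeMeter or JMeter code; the sample timestamps are made up) of how throughput falls out of it:

```python
# Throughput per the JMeter glossary: (number of requests) / (total time),
# where total time runs from the start of the first sample to the end of
# the last sample, including any gaps between samples.

def throughput(samples):
    """samples: list of (start_time_sec, duration_sec) tuples."""
    first_start = min(start for start, _ in samples)
    last_end = max(start + duration for start, duration in samples)
    return len(samples) / (last_end - first_start)

# Three requests spread over 2 seconds -> 1.5 requests/sec
print(throughput([(0.0, 0.2), (1.0, 0.3), (1.8, 0.2)]))
```

Note that because idle time between samples counts toward total time, adding think time lowers the reported throughput even if each request is fast.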

Dmitri T
  • Thanks Dmitri. As you suggested, I checked "Engine Health" in BlazeMeter; here are the metrics: CPU: 98%, Connections: 907, Memory: 49%, Network: 257 KB/sec. Could it be that, because of this, the engine wasn't able to serve more hits/sec? – Das Prakash Oct 11 '16 at 12:54
  • 98% is quite high, however given it is less than 100% it doesn't indicate the CPU being a bottleneck. Try adding more engines and see if the throughput increases or not. – Dmitri T Oct 11 '16 at 13:36

Hits are not always the best measurement of throughput. Here is why: the number of hits can be drastically altered by cache-management settings on the servers of the application under test. An example: say you have a very complex page with 100 objects. Only the top-level page is dynamic; the rest are page components such as images, style sheets, fonts, JavaScript files, etc. In a model with no cache settings in place, all 100 elements have to be requested, each generating a "hit" in both reporting and the server stats. These will all show up in your hits/second numbers.

Now optimize the cache settings so that some information is cached at the client for a very long period (logo images and fonts for a year), some for the build interval of one week (resident in the CDN or client), and only the dynamic top-level HTML remains uncached. In this model, only one hit per page view is generated at the server, except for the period immediately after a build is deployed, while the CDN is being seeded for the majority of users. Occasionally a new CDN node will come into play for users in a different geographic area, but after the first user seeds the cache, the rest pull from the CDN and then cache at the client. In this case your effective hits per second drop tremendously at both the CDN and the origin servers, especially with returning users.

Food for thought.

James Pulley