
I am having difficulty wording this problem, since this is my first assignment measuring the throughput of a feature.

The problem is that when I run a set of tests against an application server, I sometimes get an average turnaround time of 27 seconds per unit of work, and then, after a few seconds, it drops to 10 seconds per unit of work (while executing the same set of tests). How can this be explained? I am the only person using this server, so I cannot blame it on any other testing.

Asad Iqbal
  • What input sizes are you using? When testing the throughput of an algorithm, we tend to use large input sizes to help standardize results. – christopher Mar 01 '13 at 23:21
  • http://stackoverflow.com/questions/504103/how-do-i-write-a-correct-micro-benchmark-in-java – Brian Roach Mar 01 '13 at 23:33

1 Answer


Probably the JIT is kicking in after a few seconds and compiling your code to native code so that it runs faster. There could also be a caching effect going on, where warm CPU and disk caches speed up the run.

To get reproducible results when doing performance measurements, it's essential to burn the task in for a while until the metrics stabilize.
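As a rough illustration, a minimal warm-up harness in Java might look like the sketch below. Here runUnitOfWork is a hypothetical stand-in for executing one unit of work against the server, and the iteration counts are arbitrary placeholders:

    public class WarmupBenchmark {

        public static void main(String[] args) {
            // Warm-up phase: run the workload until the JIT has compiled the
            // hot paths and the caches are warm. These timings are discarded.
            for (int i = 0; i < 20; i++) {
                runUnitOfWork();
            }

            // Measurement phase: only these runs count toward the average.
            final int runs = 50;
            long totalNanos = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                runUnitOfWork();
                totalNanos += System.nanoTime() - start;
            }

            System.out.printf("Average turnaround: %.2f s%n",
                    totalNanos / (double) runs / 1_000_000_000.0);
        }

        // Hypothetical stand-in for one unit of work against the server.
        private static void runUnitOfWork() {
            // issue the request / execute the test here
        }
    }

In practice, a benchmark harness such as JMH handles this warm-up/measurement split automatically, which is usually more reliable than a hand-rolled loop like the one above.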

Thomas