In this question, I'd like to question the accepted wisdom about how to test the performance of Java code. The usual approach works along these lines:
long start = System.nanoTime();
for (int i = 0; i < SOME_VERY_LARGE_NUMBER; i++) {
    // ...do something...
}
long duration = System.nanoTime() - start;
System.out.println("Performance: "
    + new BigDecimal(duration).divide(
        new BigDecimal(SOME_VERY_LARGE_NUMBER), 3, RoundingMode.HALF_UP));
"Optimized" versions move the calls to System.nanoTime()
into the loop, growing the error margin since System.nanoTime()
takes much longer (and is way less predictable in the runtime behavior) than i ++
and the comparison.
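For reference, this is roughly what I mean by the "optimized" variant; it's only a sketch, with SOME_VERY_LARGE_NUMBER and the loop body as placeholders:

long total = 0;
for (int i = 0; i < SOME_VERY_LARGE_NUMBER; i++) {
    long t0 = System.nanoTime();
    // ...do something...
    // each sample now also includes the cost and jitter of nanoTime() itself
    total += System.nanoTime() - t0;
}
System.out.println("Average ns per iteration: "
    + (double) total / SOME_VERY_LARGE_NUMBER);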
My criticism is:
This gives me the average runtime, but that value depends on factors I'm not really interested in, like the system load while the test loop was running or jumps when the JIT or GC kicks in.
Wouldn't this approach be (much) better in most cases?
- Run the code to be measured often enough to force JIT compilation
- Run the code in a loop and measure the execution times. Remember the smallest value and abort the loop once it stabilizes (see the sketch after this list).
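Roughly what I have in mind, as a sketch (measureBestCase, WARMUP_RUNS and STABLE_RUNS are names and thresholds I made up for illustration):

public static long measureBestCase(Runnable code) {
    final int WARMUP_RUNS = 10_000; // enough iterations to trigger JIT compilation
    final int STABLE_RUNS = 100;    // stop after this many runs without a new minimum
    for (int i = 0; i < WARMUP_RUNS; i++) {
        code.run();
    }
    long best = Long.MAX_VALUE;
    int runsSinceImprovement = 0;
    while (runsSinceImprovement < STABLE_RUNS) {
        long t0 = System.nanoTime();
        code.run();
        long elapsed = System.nanoTime() - t0;
        if (elapsed < best) {
            best = elapsed;          // new minimum: keep it and reset the counter
            runsSinceImprovement = 0;
        } else {
            runsSinceImprovement++;
        }
    }
    return best; // approximate lower bound under (nearly) ideal conditions
}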
My rationale is that I usually want to know how fast some code can be (a lower bound). Any code can become arbitrarily slow through external events (mouse movements, interrupts from the graphics card because there is an analog clock on the desktop, swapping, network packets, ...), but most of the time I just want to know how fast my code can be under perfect circumstances.
It would also make performance measurement much faster since I wouldn't have to run the code for seconds or minutes to average unwanted effects out.
Can someone confirm/debunk this?