
I've been testing Java reflection and I found that, when calling a method through Method.invoke(), the time needed to finish the call varies a lot.

Here is the code that I'm using:

int nTest = 101;
long mean = 0;
long[] execTime = new long[nTest - 1];

for (int i = 0; i < nTest; i++) {
    long tStart = System.nanoTime();
    metodin.invoke(o, parameters);
    long tFinal = System.nanoTime();
    long tDiff = (tFinal - tStart) / 100;   // elapsed nanoseconds, scaled down by 100

    if (i > 0)   // I discard the first execution just in case it takes a lot of time.
        execTime[i - 1] = tDiff;
}

long totalExecutionTime = 0;

for (int i = 0; i < execTime.length; i++) {
    totalExecutionTime += execTime[i];
    // System.out.println(execTime[i]);
}
mean = totalExecutionTime / execTime.length;

And here is one of many runs where the variation is huge (output obtained by uncommenting the println above):

24
23
20
21
21
[...]
22
22
23
60
23
19
20
13028 // BOOM!!!!!!!
496 // and the low performance continues for a while...
160
115
116
120
114
123
121
115
120
114
114
127
323
40 // returns to normal
27
25
23
18
35

The variation is so big that the mean no longer reflects the real performance of the method. To work around this I tried executing the method over 1,000 times, but this "BOOM" seems to recur cyclically. Why is this happening?

1 Answer


It is universally the case in Java -- at least on the HotSpot JVM -- that code execution times will show variance like this, even if reflection isn't involved. The cause is the JIT compiler: Java code gets optimized at runtime, and the JVM tracks which methods are performance-critical and compiles and optimizes only the most frequently executed code, based on how it actually gets used. The spike you observed is most likely the JVM swapping in a new implementation of the method, after which the new code is gradually optimized again and branch prediction adapts to it.
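One way to watch this happening is to run with the JVM flag -XX:+PrintCompilation, which logs when the JIT compiles (and recompiles) methods. The sketch below is only an illustration, not your code: it warms the method up with a large number of untimed invocations before measuring, so the timed loop mostly runs already-compiled code. String.length() is just a hypothetical stand-in for your metodin / o / parameters:

import java.lang.reflect.Method;

public class WarmupSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for the question's metodin / o / parameters.
        Method metodin = String.class.getMethod("length");
        Object o = "hello";
        Object[] parameters = new Object[0];

        // Untimed warm-up: give the JIT time to compile and stabilize the hot path.
        for (int i = 0; i < 20_000; i++) {
            metodin.invoke(o, parameters);
        }

        // Timed loop: these iterations mostly run the optimized code.
        int nTest = 1_000;
        long total = 0;
        for (int i = 0; i < nTest; i++) {
            long tStart = System.nanoTime();
            metodin.invoke(o, parameters);
            total += System.nanoTime() - tStart;
        }
        System.out.println("mean ns/call after warm-up: " + (total / nTest));
    }
}

With a warm-up like this the per-call times usually settle down, although GC pauses and recompilation can still cause occasional outliers.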

All of this can make Java code faster than ahead-of-time compiled languages, which have no information about how the code actually gets run, but the trade-off is "warm-up time" and extra complexity in benchmarking. On top of that, garbage collection happens at arbitrary times and can slow your program down while it's running.
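If you need trustworthy numbers, the usual recommendation is a benchmarking harness such as JMH (the OpenJDK Java Microbenchmark Harness), which handles warm-up iterations, forking and statistics for you. A minimal sketch, assuming the JMH dependency is on the classpath and again using String.length() as a hypothetical stand-in for the reflected method:

import java.lang.reflect.Method;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
public class ReflectionBench {

    Method metodin;
    Object o;
    Object[] parameters;

    @Setup
    public void setup() throws Exception {
        // Hypothetical stand-in for the question's reflected method.
        metodin = String.class.getMethod("length");
        o = "hello";
        parameters = new Object[0];
    }

    @Benchmark
    public Object invokeReflectively() throws Exception {
        return metodin.invoke(o, parameters);
    }
}

JMH reports each score together with its error margin, which gives a much better picture than a single mean over raw nanoTime() samples.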

Louis Wasserman