We have a simple unit test in our performance test suite that verifies the base system is sane and performs reasonably before we start testing our own code. We use it to check whether a machine is suitable for running the actual performance tests.
When we compare Java 6 and Java 7 using this test, Java 7 takes considerably longer to execute: we see an average of 22 seconds for Java 6 versus 24 seconds for Java 7. The test only computes Fibonacci numbers, so only bytecode execution in a single thread should be relevant here, not I/O or anything else.
We currently run it on Windows with default settings, with and without "-server", and with both the 32-bit and 64-bit JVM; all runs show a similar degradation for Java 7.
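For reference, the runs look roughly like this (a sketch, not the exact command lines; the JUnit runner invocation and the jar names on the classpath are assumed):

rem default settings and with -server; the 32-bit vs. 64-bit runs simply use the java.exe of the respective JDK install
java -cp .;junit.jar org.junit.runner.JUnitCore BaseLinePerformance
java -server -cp .;junit.jar org.junit.runner.JUnitCore BaseLinePerformance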
Which tuning options might be suitable here to bring Java 7 up to the performance of Java 6?
import org.junit.Before;
import org.junit.Test;

public class BaseLinePerformance {

    @Before
    public void setup() throws Exception {
        // Warm-up run so the JIT has compiled fib2 before the timed test.
        fib(46);
    }

    @Test
    public void testBaseLine() throws Exception {
        long start = System.currentTimeMillis();
        fib(46);
        fib(46);
        System.out.println("Time: " + (System.currentTimeMillis() - start));
    }

    public static void fib(final int n) throws Exception {
        for (int i = 0; i < n; i++) {
            System.out.println("fib(" + i + ") = " + fib2(i));
        }
    }

    // Naive exponential recursion, intended as a pure CPU-bound workload.
    public static int fib2(final int n) {
        if (n == 0)
            return 0;
        else if (n == 1)
            return 1;
        else
            return fib2(n - 2) + fib2(n - 1);
    }
}
Update: I have reduced the test so that it does not do any sleeps, and I have followed the other suggestions from How do I write a correct micro-benchmark in Java?; I still see the same difference between Java 7 and Java 6. Additional JVM options to print compilation and GC activity show no output during the actual test; only some compilation information is printed initially.
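The reduced timed section now looks roughly like this (a sketch of the change, not the exact code: timing uses System.nanoTime, and the results are accumulated into a sum that is printed once, so no println happens inside the timed loop and the JIT cannot discard the computation):

@Test
public void testBaseLine() throws Exception {
    long sum = 0;
    long start = System.nanoTime();
    for (int run = 0; run < 2; run++) { // two timed passes, as before
        for (int i = 0; i < 46; i++) {
            sum += fib2(i); // accumulate instead of printing per iteration
        }
    }
    long elapsedMs = (System.nanoTime() - start) / 1000000;
    System.out.println("Time: " + elapsedMs + " ms, sum = " + sum);
}

The compilation and GC output was requested with the usual HotSpot options, along the lines of -XX:+PrintCompilation and -verbose:gc.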