
I was testing the performance of two different approaches to handling streams of integers (primitives). To my surprise, the first test case is always slower, no matter the order.

import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public void intStreamTest() {
    final int[] ints = new int[1000];
    for (int x = 0; x < 1000; x++) {
        ints[x] = new Random().nextInt(1000);
    }
    // T1
    long start = System.currentTimeMillis();
    for (int x = 0; x <= 10000; x++) {
        List<Integer> result =
            IntStream.of(ints).boxed().collect(Collectors.toList());
    }
    System.out.println("T1 " + (System.currentTimeMillis() - start));
    // T2
    long start2 = System.currentTimeMillis();
    for (int x = 0; x <= 10000; x++) {
        List<Integer> result =
            IntStream.of(ints).mapToObj(Integer::valueOf).collect(Collectors.toList());
    }
    System.out.println("T2 " + (System.currentTimeMillis() - start2));
}

Here is an example of the output:

T1 150
T2 95

When I change the order of test cases (swap sections T2 and T1) I get:

T2 153
T1 92

Did I make a mistake? How is it possible that the second test case is always faster (by roughly 60%)?

Piotr Niewinski
    Usually this is due to JIT compilation. There are lots of "gotchas" with microbenchmarking like this. I suggest you look into Caliper: https://github.com/google/caliper – Jon Skeet Mar 09 '20 at 09:48
  • Thanks, looks like I have to read more about this topic. Caliper looks very promising. I also added a warm-up phase for each case, and it did indeed affect the test output. Unfortunately, I now have a feeling that every benchmark I have ever done could be wrong. – Piotr Niewinski Mar 09 '20 at 11:01
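To illustrate the warm-up idea from the comments, here is a minimal sketch (class and method names are my own, not from the question): both pipelines are run a few thousand times untimed first, so the JIT has a chance to compile the hot paths before measurement starts. After warm-up, swapping the order of the two timed sections should no longer change which one appears faster. This is still a hand-rolled microbenchmark, so a harness like Caliper or JMH remains the more reliable option.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamWarmupSketch {
    static final int[] INTS = new Random().ints(1000, 0, 1000).toArray();

    // Runs the body untimed first (warm-up), then times the measured runs.
    static long timeMillis(Runnable body) {
        for (int i = 0; i < 5_000; i++) {
            body.run(); // warm-up: let the JIT compile this path
        }
        long start = System.nanoTime();
        for (int i = 0; i < 10_000; i++) {
            body.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long boxed = timeMillis(() ->
            IntStream.of(INTS).boxed().collect(Collectors.toList()));
        long mapped = timeMillis(() ->
            IntStream.of(INTS).mapToObj(Integer::valueOf).collect(Collectors.toList()));
        System.out.println("boxed    " + boxed + " ms");
        System.out.println("mapToObj " + mapped + " ms");
    }
}
```

Note that the two pipelines produce identical lists; `boxed()` is simply shorthand for `mapToObj(Integer::valueOf)` on an `IntStream`, which is why any consistent gap between them points at the measurement method rather than the code.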

0 Answers