A few hours ago I answered another Stack Overflow question, and it gave a very surprising result. The answer can be found here. The answer was, and is, partially wrong; however, my focus there was on byte addition.
Strictly speaking, it is actually byte-to-long addition.
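That is, inside the loop each byte is promoted (sign-extended) to a long before it is added to the running sum. A minimal illustration of what sum += b does for a negative byte:

byte b = -1;   // stored as 0xFF
long sum = 0L;
sum += b;      // b is sign-extended to long before the addition: sum is now -1, not 255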
This is the benchmark code I have been using:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;

public class ByteAdditionBenchmark {
    private void start() {
        int[] sizes = {
            700_000, // the first, large run is meant to warm up the JVM
            1_000,
            10_000,
            25_000,
            50_000,
            100_000,
            200_000,
            300_000,
            400_000,
            500_000,
            600_000,
            700_000,
        };

        for (int size : sizes) {
            List<byte[]> arrays = createByteArrays(size);
            // Warmup
            arrays.forEach(this::byteArrayCheck);
            benchmark(arrays, this::byteArrayCheck, "byteArrayCheck");
        }
    }

    private void benchmark(final List<byte[]> arrays, final Consumer<byte[]> method, final String name) {
        long start = System.nanoTime();
        arrays.forEach(method);
        long end = System.nanoTime();
        double nanosecondsPerIteration = (end - start) * 1d / arrays.size();
        System.out.println("Benchmark: " + name + " / iterations: " + arrays.size() + " / time per iteration: " + nanosecondsPerIteration + " ns");
    }

    private List<byte[]> createByteArrays(final int amount) {
        Random random = new Random();
        List<byte[]> resultList = new ArrayList<>();
        for (int i = 0; i < amount; i++) {
            // Each array is 4096 bytes with a single 1 in a random position.
            byte[] byteArray = new byte[4096];
            byteArray[random.nextInt(4096)] = 1;
            resultList.add(byteArray);
        }
        return resultList;
    }

    private boolean byteArrayCheck(final byte[] array) {
        long sum = 0L;
        for (byte b : array) {
            sum += b;
        }
        return (sum == 0);
    }

    public static void main(String[] args) {
        new ByteAdditionBenchmark().start();
    }
}
And these are the results I have been getting:
Benchmark: byteArrayCheck / iterations: 700000 / time per iteration: 50.26538857142857 ns
Benchmark: byteArrayCheck / iterations: 1000 / time per iteration: 20.12 ns
Benchmark: byteArrayCheck / iterations: 10000 / time per iteration: 9.1289 ns
Benchmark: byteArrayCheck / iterations: 25000 / time per iteration: 10.02972 ns
Benchmark: byteArrayCheck / iterations: 50000 / time per iteration: 9.04478 ns
Benchmark: byteArrayCheck / iterations: 100000 / time per iteration: 18.44992 ns
Benchmark: byteArrayCheck / iterations: 200000 / time per iteration: 15.48304 ns
Benchmark: byteArrayCheck / iterations: 300000 / time per iteration: 15.806353333333334 ns
Benchmark: byteArrayCheck / iterations: 400000 / time per iteration: 16.923685 ns
Benchmark: byteArrayCheck / iterations: 500000 / time per iteration: 16.131066 ns
Benchmark: byteArrayCheck / iterations: 600000 / time per iteration: 16.435461666666665 ns
Benchmark: byteArrayCheck / iterations: 700000 / time per iteration: 17.107615714285714 ns
To my knowledge, the JVM should already be fully warmed up after the first 700,000 iterations, before it starts to spit out benchmarking data.
How can it be, then, that despite the warmup the performance is still unpredictable? Almost directly after the warmup, byte addition gets blazingly fast, but after that it seems to converge again to a nominal 16 ns per iteration.
The tests have been run on a PC with a stock-clocked Intel i7 3770 and 16 GB of RAM, which is why I can't go much beyond 700,000 iterations. It is running Windows 8.1 64-bit, if that matters.
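One way to sanity-check the warmup assumption is to run with HotSpot's compilation logging switched on; it prints a line for every method the JIT compiles, so byteArrayCheck should show up during the first pass over the 700,000 arrays (this only shows when compilation happens, not what code gets generated):

java -XX:+PrintCompilation ByteAdditionBenchmark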
It turns out that the JIT was optimizing everything away, as per raphw's suggestion.
Therefore I replaced the benchmark method with the following:
private void benchmark(final List<byte[]> arrays, final Predicate<byte[]> method, final String name) {
    // Note: the parameter is now a Predicate<byte[]> rather than a Consumer<byte[]>,
    // so java.util.function.Predicate has to be imported as well.
    long start = System.nanoTime();
    boolean someUnrelatedResult = false;
    for (byte[] array : arrays) {
        // Fold every return value into a result that is printed afterwards,
        // so the JIT cannot eliminate the calls as dead code.
        someUnrelatedResult |= method.test(array);
    }
    long end = System.nanoTime();
    double nanosecondsPerIteration = (end - start) * 1d / arrays.size();
    System.out.println("Result: " + someUnrelatedResult);
    System.out.println("Benchmark: " + name + " / iterations: " + arrays.size() + " / time per iteration: " + nanosecondsPerIteration + " ns");
}
This ensures that the computation cannot be optimized away, and the test results show it (I have omitted the result print for clarity):
Benchmark: byteArrayCheck / iterations: 700000 / time per iteration: 1658.2627914285715 ns
Benchmark: byteArrayCheck / iterations: 1000 / time per iteration: 1241.706 ns
Benchmark: byteArrayCheck / iterations: 10000 / time per iteration: 1215.941 ns
Benchmark: byteArrayCheck / iterations: 25000 / time per iteration: 1332.94656 ns
Benchmark: byteArrayCheck / iterations: 50000 / time per iteration: 1456.0361 ns
Benchmark: byteArrayCheck / iterations: 100000 / time per iteration: 1753.26777 ns
Benchmark: byteArrayCheck / iterations: 200000 / time per iteration: 1756.93283 ns
Benchmark: byteArrayCheck / iterations: 300000 / time per iteration: 1762.9992266666666 ns
Benchmark: byteArrayCheck / iterations: 400000 / time per iteration: 1806.854815 ns
Benchmark: byteArrayCheck / iterations: 500000 / time per iteration: 1784.09091 ns
Benchmark: byteArrayCheck / iterations: 600000 / time per iteration: 1804.6096366666666 ns
Benchmark: byteArrayCheck / iterations: 700000 / time per iteration: 1811.0597585714286 ns
I'd say that these results look a lot more convincing with respect to the computation time. However, my question still stands: with repeated tests run at random times, the same pattern remains, namely that benchmarks with a low number of iterations are faster than those with more iterations, though they do seem to stabilize at 100,000 iterations or somewhere lower.
What is the explanation?
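(As an aside: I know that a harness like JMH takes care of warmup and dead-code elimination; roughly, the same check would look like the sketch below. This is untested on my side and the class name is only a placeholder, but my question is about why the hand-rolled version above behaves the way it does.)

import java.util.Random;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ByteArrayCheckJmh {
    private byte[] array;

    @Setup
    public void setup() {
        // Same data shape as above: a 4096-byte array with a single 1 in a random slot.
        array = new byte[4096];
        array[new Random().nextInt(4096)] = 1;
    }

    @Benchmark
    public boolean byteArrayCheck() {
        long sum = 0L;
        for (byte b : array) {
            sum += b;
        }
        // Returning the result lets JMH consume it, so the loop cannot be eliminated as dead code.
        return sum == 0;
    }
}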