Faulty benchmarking. The non-exhaustive list of what is wrong (a sketch of such a broken benchmark follows the list):
- No proper warmup: single-shot measurements are almost always wrong;
- Mixing several codepaths in a single method: we probably start compiling the method with the execution data available only for the first loop in the method;
- Sources are predictable: should the loop compile, we can actually predict the result;
- Results are dead-code eliminated: should the loop compile, we can throw the loop away.
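For illustration, here is a minimal sketch of the kind of hand-rolled benchmark that trips over all of these at once (a made-up example for this answer, not the OP's actual code):
import java.util.ArrayList;
import java.util.List;

public class NaiveLoopTiming {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 0; i < 100; i++) {
            list.add(i);                             // predictable source
        }

        long t1 = System.nanoTime();                 // no warmup: single-shot timing of cold, interpreted code
        int s1 = 0;
        for (int i = 0; i < list.size(); i++) {
            s1 += list.get(i);
        }

        long t2 = System.nanoTime();
        int s2 = 0;
        for (int i : list) {                         // both loops live in one method, so the second loop
            s2 += i;                                 // is compiled with the profile gathered for the first
        }
        long t3 = System.nanoTime();

        // s1 and s2 are never used afterwards, so both loops are fair game for dead-code elimination
        System.out.println("indexed: " + (t2 - t1) + " ns, enhanced: " + (t3 - t2) + " ns");
    }
}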
Take your time listening to these talks and going through these samples.
This is how you do it arguably correctly with jmh:
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.logic.BlackHole;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 3, time = 1)
@Fork(3)
@State(Scope.Thread)
public class EnhancedFor {

    private static final int SIZE = 100;

    private List<Integer> list;

    @Setup
    public void setup() {
        list = new ArrayList<Integer>(SIZE);
    }

    @GenerateMicroBenchmark
    public int enhanced() {
        int s = 0;
        for (int i : list) {
            s += i;
        }
        return s;
    }

    @GenerateMicroBenchmark
    public int indexed() {
        int s = 0;
        for (int i = 0; i < list.size(); i++) {
            s += list.get(i);
        }
        return s;
    }

    @GenerateMicroBenchmark
    public void enhanced_indi(BlackHole bh) {
        for (int i : list) {
            bh.consume(i);
        }
    }

    @GenerateMicroBenchmark
    public void indexed_indi(BlackHole bh) {
        for (int i = 0; i < list.size(); i++) {
            bh.consume(list.get(i));
        }
    }
}
...which yields something along the lines of:
Benchmark                       Mode   Samples      Mean   Mean error   Units
o.s.EnhancedFor.enhanced        avgt         9     8.162        0.057   ns/op
o.s.EnhancedFor.enhanced_indi   avgt         9     7.600        0.067   ns/op
o.s.EnhancedFor.indexed         avgt         9     2.226        0.091   ns/op
o.s.EnhancedFor.indexed_indi    avgt         9     2.116        0.064   ns/op
Now that's a tiny difference between enhanced and indexed loops, and that difference is naively explained by the different code paths taken to access the backing storage. However, the explanation is actually much simpler: OP FORGOT TO POPULATE THE LIST, which means the loop bodies ARE NEVER EVER EXECUTED, and the benchmark is actually measuring the cost of size() vs. iterator()!
Fixing that:
@Setup
public void setup() {
    list = new ArrayList<Integer>(SIZE);
    for (int c = 0; c < SIZE; c++) {
        list.add(c);
    }
}
...then yields:
Benchmark                       Mode   Samples      Mean   Mean error   Units
o.s.EnhancedFor.enhanced        avgt         9   171.154       25.892   ns/op
o.s.EnhancedFor.enhanced_indi   avgt         9   384.192        6.856   ns/op
o.s.EnhancedFor.indexed         avgt         9   148.679        1.357   ns/op
o.s.EnhancedFor.indexed_indi    avgt         9   465.684        0.860   ns/op
Note that the differences are really minute even on the nano-scale, and any non-trivial loop body will swallow whatever difference there is. The differences here can be explained by how lucky we are with inlining get() and the Iterator methods, and by the optimizations we get to enjoy after those inlinings.
Note the *_indi tests, which defeat the loop unrolling optimizations. Even though indexed enjoys better performance when successfully unrolled, it is the opposite when the unrolling is broken!
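Roughly speaking, a successfully unrolled indexed loop behaves as if it had been written along these lines (a hand-written sketch of the idea only; the JIT does this at the machine-code level, and the real shape is exactly what -XX:+PrintAssembly would show):
// Conceptual shape of an unrolled summation loop: four elements per iteration,
// plus a tail loop to pick up the remainder.
int s = 0;
int i = 0;
int n = list.size();
for (; i + 3 < n; i += 4) {
    s += list.get(i);
    s += list.get(i + 1);
    s += list.get(i + 2);
    s += list.get(i + 3);
}
for (; i < n; i++) {
    s += list.get(i);
}
With bh.consume(...) in the body, every single iteration carries a side effect the JIT has to preserve, which lines up with the *_indi numbers above.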
With headlines like that, the difference between indexed and enhanced is of no more than academic interest. Figuring out the exact generated code with -XX:+PrintAssembly for all the cases is left as an exercise for the reader :)
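If you do want to take a peek, one possible incantation (assuming a hsdis disassembler plugin is on the JVM's library path, that the benchmark jar carries the name the JMH archetype gives it, and that your JMH version's runner accepts -jvmArgs for forwarding flags to the forked JVMs; older versions may spell these differently):
java -jar target/microbenchmarks.jar ".*EnhancedFor.*" -jvmArgs "-XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly"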