The most efficient approaches, according to my benchmark, are the HashMap.forEach() method added in Java 8 and HashMap.entrySet().forEach().
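For reference, the two approaches look like this (a minimal usage sketch with made-up example data):

```java
import java.util.HashMap;

public class ForEachDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Internal iteration: the map invokes your BiConsumer once per entry.
        map.forEach((key, value) -> System.out.println(key + " -> " + value));

        // Internal iteration over the entry set: one Map.Entry per callback.
        map.entrySet().forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
    }
}
```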
JMH Benchmark:
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.runner.options.TimeValue;

@State(Scope.Benchmark)
public class Test {

    @Param({"50", "500", "5000", "50000", "500000"})
    int limit;

    HashMap<String, Integer> m = new HashMap<>();

    // The accumulator is a field rather than a local because the lambdas
    // need to write to it; returning it from each benchmark keeps the work
    // from being dead-code-eliminated.
    int i;

    @Setup(Level.Trial)
    public void setup() {
        m = new HashMap<>();  // start each trial from a fresh map
        for (int i = 0; i < limit; i++) {
            m.put(i + "", i);
        }
    }

    @Benchmark
    public int forEach(Blackhole b) {
        i = 0;
        m.forEach((k, v) -> { i += k.length() + v; });
        return i;
    }

    @Benchmark
    public int keys(Blackhole b) {
        i = 0;
        for (String key : m.keySet()) { i += key.length() + m.get(key); }
        return i;
    }

    @Benchmark
    public int entries(Blackhole b) {
        i = 0;
        for (Map.Entry<String, Integer> entry : m.entrySet()) { i += entry.getKey().length() + entry.getValue(); }
        return i;
    }

    @Benchmark
    public int keysForEach(Blackhole b) {
        i = 0;
        m.keySet().forEach(key -> { i += key.length() + m.get(key); });
        return i;
    }

    @Benchmark
    public int entriesForEach(Blackhole b) {
        i = 0;
        m.entrySet().forEach(entry -> { i += entry.getKey().length() + entry.getValue(); });
        return i;
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(Test.class.getSimpleName())
                .forks(1)
                .warmupIterations(25)
                .measurementIterations(25)
                .measurementTime(TimeValue.milliseconds(1000))
                .warmupTime(TimeValue.milliseconds(1000))
                .timeUnit(TimeUnit.MICROSECONDS)
                .mode(Mode.AverageTime)
                .build();
        new Runner(opt).run();
    }
}
Results:
Benchmark               (limit)  Mode  Cnt      Score     Error  Units
Test.entries                 50  avgt   25      0.282 ±   0.037  us/op
Test.entries                500  avgt   25      2.792 ±   0.080  us/op
Test.entries               5000  avgt   25     29.986 ±   0.256  us/op
Test.entries              50000  avgt   25   1070.218 ±   5.230  us/op
Test.entries             500000  avgt   25   8625.096 ±  24.621  us/op
Test.entriesForEach          50  avgt   25      0.261 ±   0.008  us/op
Test.entriesForEach         500  avgt   25      2.891 ±   0.007  us/op
Test.entriesForEach        5000  avgt   25     31.667 ±   1.404  us/op
Test.entriesForEach       50000  avgt   25    664.416 ±   6.149  us/op
Test.entriesForEach      500000  avgt   25   5337.642 ±  91.186  us/op
Test.forEach                 50  avgt   25      0.286 ±   0.001  us/op
Test.forEach                500  avgt   25      2.847 ±   0.009  us/op
Test.forEach               5000  avgt   25     30.923 ±   0.140  us/op
Test.forEach              50000  avgt   25    670.322 ±   7.532  us/op
Test.forEach             500000  avgt   25   5450.093 ±  62.384  us/op
Test.keys                    50  avgt   25      0.453 ±   0.003  us/op
Test.keys                   500  avgt   25      5.045 ±   0.060  us/op
Test.keys                  5000  avgt   25     58.485 ±   3.687  us/op
Test.keys                 50000  avgt   25   1504.207 ±  87.955  us/op
Test.keys                500000  avgt   25  10452.425 ±  28.641  us/op
Test.keysForEach             50  avgt   25      0.567 ±   0.025  us/op
Test.keysForEach            500  avgt   25      5.743 ±   0.054  us/op
Test.keysForEach           5000  avgt   25     61.234 ±   0.171  us/op
Test.keysForEach          50000  avgt   25   1142.416 ±   3.494  us/op
Test.keysForEach         500000  avgt   25   8622.734 ±  40.842  us/op
As you can see, HashMap.forEach() and HashMap.entrySet().forEach() perform best for large maps, and the for loop over entrySet() matches them on small maps.
The keys-based methods are probably slower because they have to look up the value again for each key, while the other methods only read a field of an entry object they already hold. I would expect the iterator-based methods to be slower because external iteration requires two method calls per element (hasNext() and next()) and stores the iteration state in the iterator object, whereas the internal iteration done by forEach() requires just one call to accept() per element.
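To make that difference concrete, here is roughly what the two styles look like side by side; the enhanced for loop over entrySet() desugars to explicit Iterator calls like the first block (a sketch, variable names are mine):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class IterationStyles {
    public static void main(String[] args) {
        HashMap<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("bb", 2);

        // External iteration: two calls (hasNext, next) per element,
        // with the iteration state held in the Iterator object.
        int external = 0;
        Iterator<Map.Entry<String, Integer>> it = m.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> e = it.next();
            external += e.getKey().length() + e.getValue();
        }

        // Internal iteration: the map walks its own buckets and makes
        // one accept(...) call per element on the given lambda.
        int[] internal = {0};
        m.forEach((k, v) -> internal[0] += k.length() + v);

        System.out.println(external + " " + internal[0]);  // prints "6 6"
    }
}
```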
For a more accurate result, you should profile on your target hardware, with your target data, performing your target action inside the loop.