I know that in Java 8 HashMap was optimized for poorly distributed hashCode values: when the number of entries in a bucket exceeds a threshold, the bucket is rebuilt from a linked list into a balanced tree. It is also stated that this optimization doesn't help for keys that are not Comparable (at least, performance is not improved).
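For reference, treeification in the OpenJDK 8 java.util.HashMap source is governed by these two constants; with a constant hashCode all entries land in one bucket, so the table is first resized up to 64 buckets and the overfull bucket is then treeified:

static final int TREEIFY_THRESHOLD = 8;     // bin count threshold for using a tree rather than a list
static final int MIN_TREEIFY_CAPACITY = 64; // smallest table capacity at which bins may be treeified (smaller tables are resized instead)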
In the example below I put non-Comparable keys into a HashMap:
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

class Main {
    public static void main(String[] args) throws InterruptedException {
        Map<Key, Integer> map = new HashMap<>();
        IntStream.range(0, 15)
                 .forEach(i -> map.put(new Key(i), i));
        // keep the JVM alive so a heap dump can be taken
        TimeUnit.DAYS.sleep(1);
    }
}
final class Key {
    private final int i;

    public Key(int i) {
        this.i = i;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Key key = (Key) o;
        return i == key.i;
    }

    @Override
    public int hashCode() {
        // constant hash code: every key collides into the same bucket
        return 1;
    }
}
But inspecting the heap dump shows that the nodes were rearranged into a tree.
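(As a cross-check that doesn't need a heap dump, the bin type can also be inspected with reflection. This is only a sketch: it assumes OpenJDK's internal field name "table", and on Java 9+ it would need --add-opens.)

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.IntStream;

class BinCheck {
    public static void main(String[] args) throws Exception {
        Map<Key, Integer> map = new HashMap<>();
        IntStream.range(0, 15).forEach(i -> map.put(new Key(i), i));
        // peek at HashMap's internal bucket array (field named "table" in OpenJDK)
        Field tableField = HashMap.class.getDeclaredField("table");
        tableField.setAccessible(true);
        Object[] table = (Object[]) tableField.get(map);
        for (Object bin : table) {
            if (bin != null) {
                // prints java.util.HashMap$TreeNode once the bucket has been treeified
                System.out.println(bin.getClass().getName());
            }
        }
    }
}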
My question is: why are the nodes rebuilt into a tree if it will not improve performance, and by which criteria are the keys compared in this case to decide which key becomes the right node and which the left?