
I understand that there are two ways a hash collision can occur in Java's `HashMap`:

1. `hashCode()` for the key object produces the same hash value as one already produced, even if the hash bucket is not full yet (see the sketch after this list).

2. The hash bucket is already full, so the new entry has to go at an existing index.
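
For scenario #1, a minimal sketch using the well-known fact that the distinct Strings `"Aa"` and `"BB"` have the same `hashCode()` (both 2112):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are distinct Strings with the same hashCode() (2112),
        // so they collide even in a mostly empty map: scenario #1.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        // Both entries share a bucket, but equals() keeps them distinct.
        System.out.println(map.size()); // 2
    }
}
```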

In the case of Java's `HashMap`, situation #2 should be rare in practice, due to the large number of allowed entries and automatic resizing (see my other question).

Am I correct in my understanding?

But for the sake of theoretical knowledge: do programmers or the JVM do anything, or can they do anything, to avoid scenario #2? Or is allowing the hash bucket to be as large as possible and then continuously resizing the only strategy (as is done in the case of `HashMap`)?

I guess, as a programmer, I should focus only on writing a good `hashCode()` and not worry about scenario #2 (since that is already taken care of by the API).
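
For what it's worth, a minimal sketch of what I mean by a good `hashCode()`, for a hypothetical `Point` key class, using the standard `java.util.Objects.hash` helper and keeping `equals()` consistent with it:

```java
import java.util.Objects;

// Hypothetical key class: hashCode() spreads values reasonably well
// and is always consistent with equals().
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // combines fields with the usual 31-multiplier scheme
    }
}
```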


1 Answer


I think #2 is a special case of #1; it's really the same thing. When `HashMap` decides where to put a new element, it does so not because everything else is full, but because the hash code maps to the same bucket as an element that's already in the map.
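
To make that concrete, here's a rough sketch of power-of-two bucket indexing; `HashMap` does something similar internally (after additionally XOR-ing the high bits of the hash into the low bits), but the names here are mine, not the JDK's:

```java
public class BucketIndexDemo {
    // Sketch: with a power-of-two table length, the bucket index is just
    // the low bits of the hash, so distinct hashes can share a bucket.
    static int bucketIndex(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex(17, 16)); // 1
        System.out.println(bucketIndex(33, 16)); // 1 -> collides with 17
    }
}
```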

I agree, you should focus on the `hashCode()`; see: Creating a hashCode() Method - Java

  • so should `hashCode()` of the key take care of the full-bucket scenario, or does `HashMap` do it? I guess `HashMap` does it. – Sabir Khan Jan 29 '16 at 11:17
  • HashMap does. That's why its constructors have `initialCapacity` and `loadFactor` parameters, so it knows when to resize. – Gavriel Jan 29 '16 at 11:22
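
A minimal sketch of what that last comment describes, with illustrative capacity and load-factor values (not recommendations):

```java
import java.util.HashMap;
import java.util.Map;

public class SizingDemo {
    public static void main(String[] args) {
        // HashMap(initialCapacity, loadFactor): the map resizes once
        // size exceeds capacity * loadFactor. With 1024 * 0.75 = 768,
        // 700 insertions never trigger a resize (numbers are illustrative).
        Map<String, Integer> map = new HashMap<>(1024, 0.75f);
        for (int i = 0; i < 700; i++) {
            map.put("key-" + i, i);
        }
        System.out.println(map.size()); // 700
    }
}
```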