You can always look at the source code.
And there you will see that HashMap has an array of buckets:
transient Entry[] table;
Every bucket is essentially a linked list:
static class Entry<K,V> implements Map.Entry<K,V> {
    final K key;
    V value;
    Entry<K,V> next;
    final int hash;
    // ... constructor and methods omitted
}
The array gives you constant-time access to the bucket for a given hash code, and then you have to loop through that list (which hopefully does not have more than one or two entries):
final Entry<K,V> getEntry(Object key) {
    int hash = (key == null) ? 0 : hash(key.hashCode());
    for (Entry<K,V> e = table[indexFor(hash, table.length)];
         e != null;
         e = e.next) {
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k))))
            return e;
    }
    return null;
}
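If a bucket's list does grow long, lookups degrade to a linear scan of that list. Here is a contrived sketch that forces this worst case (BadKey is a made-up class for illustration, not anything from the JDK):

import java.util.HashSet;
import java.util.Set;

// Every BadKey reports the same hash code, so all of them
// end up in one bucket and contains() must walk the whole list.
class BadKey {
    final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; }
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Set<BadKey> set = new HashSet<>();
        for (int i = 0; i < 1000; i++) set.add(new BadKey(i));
        System.out.println(set.contains(new BadKey(999))); // true, but O(n)
    }
}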
When an element with a different hashCode is added to a HashSet, a new bucket has got to be added, right?
When an element with the same hashCode as an existing one is added, it goes into the same bucket, where it is linked into that bucket's list (in this version of the code, new entries are actually inserted at the head of the list, not the end).
When an element with a new hashCode is added, it may or may not go into a different bucket, because there are far more possible hash codes (all 2^32 ints) than buckets.
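To see why, look at how the bucket index is derived: indexFor in this version is just a bit mask over the hash code, so many hash codes share one bucket. A minimal sketch, with the supplemental hash(...) mixing step left out for clarity:

public class IndexForDemo {
    // Same masking as the old HashMap's indexFor: the table length
    // is always a power of two, so this keeps only the low bits.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        // With 16 buckets, the distinct hash codes 1 and 17 collide:
        System.out.println(indexFor(1, 16));  // 1
        System.out.println(indexFor(17, 16)); // 1
    }
}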
All buckets are created in advance, when the map's table is allocated. If the number of entries exceeds the threshold (capacity × load factor, 0.75 by default), the table is resized with more buckets and every entry is rehashed into the new buckets.
To what data structure would this new bucket be added?
Buckets are not added. There is a fixed array of buckets. When you need more capacity, the whole structure is rebuilt with a bigger array.
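This also explains why every entry has to be rehashed: the bucket index depends on the table length, so when the table doubles, the same hash code can land somewhere else. A small sketch reusing the masking logic from above (the class name is just for illustration):

public class ResizeDemo {
    static int indexFor(int h, int length) { return h & (length - 1); }

    public static void main(String[] args) {
        int hash = 21;
        System.out.println(indexFor(hash, 16)); // bucket 5 before the resize
        System.out.println(indexFor(hash, 32)); // bucket 21 after doubling
    }
}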
Does it again resort to some sort of array and resize it each time a new element is added, thus making addition and deletion in the HashSet O(n)?
Not each time. Ideally never. It only happens when you miscalculated the capacity and end up needing more. Then it gets expensive, as everything has to be rehashed into a new array. The process is essentially the same as with ArrayList's backing array.
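So if you know roughly how many elements are coming, you can avoid the copying entirely by sizing the set up front (the class name below is just illustration):

import java.util.HashSet;
import java.util.Set;

public class PresizeDemo {
    public static void main(String[] args) {
        // Expecting ~1000 elements: 1000 / 0.75 ≈ 1334, so request at
        // least that much capacity and the table never needs rebuilding.
        Set<Integer> set = new HashSet<>(1334);
        for (int i = 0; i < 1000; i++) {
            set.add(i);
        }
        System.out.println(set.size()); // 1000
    }
}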