The HashMap implementation allows you to set the load factor. This design decision gives the user of the class some measure of control over the conditions under which the underlying data structure is resized.
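For reference, the load factor is set through the two-argument constructor, HashMap(int initialCapacity, float loadFactor); the values here are only illustrative:

    import java.util.HashMap;
    import java.util.Map;

    class LoadFactorBasics {
        public static void main(String[] args) {
            // Initial capacity 16 (the default) with a load factor of 0.5 gives a
            // resize threshold of 16 * 0.5 = 8 entries.
            Map<String, Integer> map = new HashMap<>(16, 0.5f);

            for (int i = 1; i <= 9; i++) {
                map.put("key" + i, i); // the 9th put exceeds the threshold and the table is resized
            }
        }
    }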
The default load factor value of 0.75 was likely chosen as a reasonable balance between memory usage and map performance (determined by collision rate and resize overhead).
For any given instance of HashMap, you get to choose the appropriate load factor for your particular situation. You need to weigh the relative importance of a small memory footprint, how performance sensitive you are for lookups, and how performance sensitive you are for puts (a put that causes the map to be rebuilt can be very slow). A rough sketch of those trade-offs is shown below.
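The capacities and load factors in this sketch are just example values, not recommendations:

    import java.util.HashMap;
    import java.util.Map;

    class LoadFactorTradeOffs {
        public static void main(String[] args) {
            // Lookup-heavy and memory is cheap: a lower load factor keeps the table sparser,
            // so collisions (and the equals() comparisons they cause) are less likely.
            Map<String, String> lookupOptimized = new HashMap<>(256, 0.5f);

            // Tight on memory and able to tolerate slower lookups: a higher load factor lets
            // the table fill up more before the (expensive) resize-and-rehash kicks in.
            Map<String, String> memoryOptimized = new HashMap<>(16, 0.9f);
        }
    }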
As an aside, your concept of a "full" HashMap is a little skewed. The implementation handles an arbitrary number of collisions just fine (although there is a performance cost to collisions). You could use a HashMap with a load factor of 1 billion and it would (probably) never grow beyond a capacity of 16.
There is no problem with setting the load factor to 1.0, which would result in a rehash operation when you add the 17th element to a default-sized HashMap. Compared to the default of 0.75, you will use a little less space, do fewer rehashes, and have a few more collisions (and thus more searching with equals() through a linked list).
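A minimal sketch of that scenario (the keys are arbitrary; the rehash itself is internal, so the comments just note where it happens):

    import java.util.HashMap;
    import java.util.Map;

    class FullTableDemo {
        public static void main(String[] args) {
            // Default capacity 16 with a load factor of 1.0: the resize threshold is 16 * 1.0 = 16.
            Map<Integer, String> map = new HashMap<>(16, 1.0f);

            for (int i = 1; i <= 16; i++) {
                map.put(i, "value" + i); // fills the table up to the threshold; no rehash yet
            }

            map.put(17, "value17"); // the 17th entry pushes size past the threshold and triggers a rehash
        }
    }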