As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put).
I am not getting how increasing the load factor, say to 1, increases the lookup time.
For example: the initial capacity is 16 and the load factor is 1, so resizing to 32 happens once the size reaches 16 * 1 = 16. Now if I put any new entry, how is the lookup time higher compared to a load factor of .75 (in which case the HashMap would already have resized at size 12)?
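To make the arithmetic concrete, here is a small sketch of how I understand the resize threshold (just my own illustration of capacity * loadFactor, not the actual HashMap source):

```java
public class ThresholdSketch {
    public static void main(String[] args) {
        int capacity = 16;

        // My understanding: resizing is triggered once the number of
        // entries exceeds capacity * loadFactor.
        double thresholdDefault = capacity * 0.75; // 12 entries
        double thresholdOne     = capacity * 1.0;  // 16 entries

        System.out.println("load factor 0.75 -> resize after " + thresholdDefault + " entries");
        System.out.println("load factor 1.0  -> resize after " + thresholdOne + " entries");
    }
}
```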
As this answer to What is the significance of load factor in HashMap? says, the fewer the free buckets, the higher the chance of collision.
I am not sure how the number of free buckets is related to the chance of collision.
Per my understanding, the bucket is decided based on the hash code of the key object. Only if it comes out the same as for some key object already in a bucket is there a chance of collision; otherwise the entry goes to a different bucket (out of the available buckets). So how is collision related to free buckets? Do you mean to say that even if the hash code is different and the HashMap is full, it will try to fit the entry into an existing bucket?
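Here is a sketch of my mental model of how the bucket is picked (roughly modelled on what I have read about HashMap's index calculation, not the real implementation), which is why I don't see where free buckets come in:

```java
import java.util.Objects;

public class BucketSketch {
    // My understanding: the bucket index comes from the key's hash code,
    // masked down to the current table size (a power of two).
    static int bucketIndex(Object key, int capacity) {
        int h = Objects.hashCode(key);
        h ^= (h >>> 16); // spread high bits, roughly like HashMap does
        return h & (capacity - 1);
    }

    public static void main(String[] args) {
        // Keys 1 and 17 have different hash codes but land in the same
        // bucket when there are only 16 buckets...
        System.out.println(bucketIndex(1, 16));  // 1
        System.out.println(bucketIndex(17, 16)); // 1 -> collision

        // ...whereas with 32 buckets they spread out.
        System.out.println(bucketIndex(1, 32));  // 1
        System.out.println(bucketIndex(17, 32)); // 17 -> no collision
    }
}
```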
It's not a duplicate of What is the significance of load factor in HashMap?. I am asking about a specific point which is not answered in that link.