I heard that the load factor in HashMap controls when the entries are rehashed into a new bucket array, and that it's best kept at 0.75: when the size reaches 0.75 * current capacity, the bucket array is reallocated to twice the current capacity.
For instance, with capacity 16, the array reallocates when the size reaches 16 * 0.75 = 12. At that point we allocate 16 extra buckets even before the map holds 16 entries, which seems memory inefficient.
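To make the question concrete, here is a minimal sketch of the resize behavior I'm describing. This is not the real `java.util.HashMap` implementation (the class `SimpleHashMap` and its fields are made up for illustration); it just shows a table that doubles once the size would exceed capacity * loadFactor:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, NOT java.util.HashMap: a chained hash table that
// doubles its bucket array once size would exceed capacity * 0.75.
class SimpleHashMap<K, V> {
    private static final float LOAD_FACTOR = 0.75f;

    static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private List<Entry<K, V>>[] buckets;
    private int size;
    private int threshold; // resize when size would exceed this

    @SuppressWarnings("unchecked")
    SimpleHashMap(int capacity) {
        buckets = new List[capacity];
        threshold = (int) (capacity * LOAD_FACTOR); // e.g. 16 * 0.75 = 12
    }

    int capacity() { return buckets.length; }
    int size() { return size; }

    void put(K key, V value) {
        if (size + 1 > threshold) resize(); // grow before chains get long
        int i = index(key, buckets.length);
        if (buckets[i] == null) buckets[i] = new ArrayList<>();
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        buckets[i].add(new Entry<>(key, value));
        size++;
    }

    V get(K key) {
        List<Entry<K, V>> chain = buckets[index(key, buckets.length)];
        if (chain != null) {
            for (Entry<K, V> e : chain) {
                if (e.key.equals(key)) return e.value;
            }
        }
        return null;
    }

    @SuppressWarnings("unchecked")
    private void resize() {
        List<Entry<K, V>>[] old = buckets;
        buckets = new List[old.length * 2]; // double: 16 -> 32
        threshold = (int) (buckets.length * LOAD_FACTOR);
        for (List<Entry<K, V>> chain : old) { // rehash every entry
            if (chain == null) continue;
            for (Entry<K, V> e : chain) {
                int i = index(e.key, buckets.length);
                if (buckets[i] == null) buckets[i] = new ArrayList<>();
                buckets[i].add(e);
            }
        }
    }

    private int index(Object key, int capacity) {
        return (key.hashCode() & 0x7fffffff) % capacity;
    }
}
```

So with an initial capacity of 16, the 13th insertion triggers a resize to 32 buckets, even though 4 of the original 16 buckets may never have been used.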
If the reason is time efficiency, how exactly does that work, and what are the trade-offs of choosing a load factor?