I'm having trouble conceptually understanding why the insert and get operations have an amortized running time of O(1) in Java's `HashMap` (simplified here to a plain hash table). Note that in my example the table is created at its maximum capacity, so it will need to be resized only once, when the 75% load factor is reached. `get` will also never be called on a key that isn't in the table.
Amortized running time is calculated as (total cost of a sequence of operations) / (number of operations in the sequence).
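To make sure I have the definition itself right, here is a toy example of that ratio using a dynamic array that doubles when full (my own example, not `HashMap` itself). The cost model is an assumption on my part: 1 unit per write, plus 1 unit per element copied during a resize.

```java
// Toy amortization demo (my own cost model, not HashMap internals):
// insert n items into an array that doubles when full, total up the cost,
// and divide by n to get the amortized cost per insert.
public class AmortizedDemo {
    static double amortizedCost(int n) {
        int capacity = 1;
        int size = 0;
        long totalCost = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                totalCost += size;  // a resize copies every existing element
                capacity *= 2;
            }
            totalCost += 1;         // the write itself
            size++;
        }
        // The copies sum to less than 2n, so the total stays under 3n,
        // and (total cost) / n -- the amortized cost -- is bounded by a constant.
        return (double) totalCost / n;
    }

    public static void main(String[] args) {
        System.out.println(amortizedCost(1_000_000));
    }
}
```

If I run this, the ratio stays below 3 no matter how large n gets, which is what I understand "amortized O(1)" to mean even though individual resizes are expensive.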
For insert, an element is added to the bucket corresponding to a particular slot in the hash table. Since the newest value is always appended to the end of the bucket, the values here are:
- Total cost = find bucket + place at end of bucket, all O(1) time
- Number of operations = find bucket + place at end of bucket
- Therefore, Amortized Running Time = (find bucket + place at end of bucket)/(find bucket + place at end of bucket), or O(1)
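To pin down the two steps I listed above, here is a minimal chained-table sketch (my simplification with my own names, not `HashMap`'s actual internals): find the bucket from the hash, then append at the end of its list.

```java
import java.util.LinkedList;

// Minimal chained hash table sketch (my simplification, not HashMap's source).
class InsertSketch {
    static class Entry {
        final String key;
        final String value;
        Entry(String k, String v) { key = k; value = v; }
    }

    final LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    InsertSketch(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    void insert(String key, String value) {
        // find bucket: O(1) (floorMod keeps the index non-negative)
        int slot = Math.floorMod(key.hashCode(), buckets.length);
        // place at end of bucket: O(1) for a linked list
        buckets[slot].addLast(new Entry(key, value));
    }
}
```

Both steps are constant time, which is where my (find bucket + place at end) reasoning comes from.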
The only aspect that confuses me here is what happens when the added element pushes the table over its load factor and triggers a resize. What steps would Java perform here, and why would the amortized running time remain O(1)?
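My rough mental model of the resize step is the following sketch (definitely an assumption on my part that I'd like confirmed, not Java's actual code): allocate a table of double the capacity and rehash every existing entry into it, which by itself costs O(n) for that one insert.

```java
import java.util.LinkedList;

// My assumed resize step (not HashMap's real implementation): double the
// bucket array and rehash every entry, an O(n) operation on its own.
class ResizeSketch {
    static LinkedList<String>[] resize(LinkedList<String>[] old) {
        @SuppressWarnings("unchecked")
        LinkedList<String>[] bigger = new LinkedList[old.length * 2];
        for (int i = 0; i < bigger.length; i++) bigger[i] = new LinkedList<>();
        for (LinkedList<String> bucket : old) {
            for (String key : bucket) {
                // every entry is rehashed against the new length and moved
                bigger[Math.floorMod(key.hashCode(), bigger.length)].addLast(key);
            }
        }
        return bigger;
    }
}
```

If that model is right, my guess is that the O(n) resize cost gets spread across the n cheap inserts that preceded it, the same way as in the doubling-array example, but I'd like that confirmed.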
For get, the time to find a bucket is constant, and finding a match could take anywhere from 1 operation (the target is the first value in the bucket) to n operations (all entries hash to the same bucket and the target is at the end). In this case:
- Total cost = find bucket + number of steps to find the target in the bucket, each step costing 1 time unit
- Number of operations = find bucket + number of steps to find target in bucket
- Therefore, Amortized Running Time = (find bucket + number of steps to find target in bucket)/(find bucket + number of steps to find target in bucket), or O(1)
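A matching sketch of get, under the same simplifications as before (again my own code, not `HashMap`'s): the bucket is found in one step, but the scan inside it can touch every entry in the worst case, which is the 1-to-n range I described.

```java
import java.util.LinkedList;

// Minimal get sketch (my simplification, not HashMap's source):
// one hash step to find the bucket, then a linear scan down its chain.
class GetSketch {
    static class Entry {
        final String key;
        final String value;
        Entry(String k, String v) { key = k; value = v; }
    }

    final LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    GetSketch(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    void insert(String key, String value) {
        buckets[Math.floorMod(key.hashCode(), buckets.length)]
                .addLast(new Entry(key, value));
    }

    String get(String key) {
        int slot = Math.floorMod(key.hashCode(), buckets.length); // find bucket: O(1)
        for (Entry e : buckets[slot]) {    // 1..n steps down the chain
            if (e.key.equals(key)) return e.value;
        }
        return null; // unreachable under my setup: get is never called on a missing key
    }
}
```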
This is my attempt to apply the amortization thought process to these operations. As I'm fairly new to the concept, I'm not sure whether my logic is correct, and I was hoping for feedback on a) whether I'm right or b) where I'm going wrong. Any help is appreciated.