Overview
The time complexity of a resize is O(n).
Resizing creates a new, larger array (OpenJDK's ArrayList grows by half of the old capacity, i.e. to roughly 1.5 times its previous size) and copies every element from the original array into it, which requires a full pass over the existing elements. The copy itself is heavily optimized by the JVM (Arrays.copyOf delegates to System.arraycopy, which the JIT typically compiles as an intrinsic), but it still scales linearly with the number of elements.
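As a rough, stand-alone illustration of that grow-and-copy step (this is a simplified sketch, not the actual JDK code, which is shown further down):

// Simplified sketch of a grow step: allocate a larger array and copy
// every existing element over. The copy is what makes a resize O(n).
static Object[] growSketch(Object[] elements, int size) {
    int oldCapacity = elements.length;
    // Preferred growth in OpenJDK is half of the old capacity (+50 %).
    int newCapacity = Math.max(oldCapacity + (oldCapacity >> 1), oldCapacity + 1);
    Object[] larger = new Object[newCapacity];
    System.arraycopy(elements, 0, larger, 0, size);
    return larger;
}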
Amortized
Resizing is not needed often; in particular, not on every add call. That is exactly why the capacity grows geometrically (by half of the old capacity each time): a single resize leaves room for many subsequent add calls before the next one is needed.
More formally, this yields an amortized complexity of O(1) for add, even though its worst-case complexity is O(n), bounded by the resize. Intuitively, the cost of each O(n) copy is spread across the many cheap add calls that the freshly gained capacity makes possible.
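To convince yourself of the amortized bound, here is a small stand-alone simulation (not JDK code) that counts how many element copies resizing causes while appending n elements to an array that grows by 50 % each time. The copies-per-add ratio stays bounded by a small constant:

public class AmortizedDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        int capacity = 10;          // ArrayList's default initial capacity
        int size = 0;
        long copies = 0;            // total elements copied due to resizes
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;                // a resize copies all current elements
                capacity += capacity >> 1;     // preferred growth: +50 %
            }
            size++;
        }
        // The total number of copies grows linearly with n,
        // so the average (amortized) cost per add stays O(1).
        System.out.printf("n = %d, copies = %d, copies per add = %.2f%n",
                n, copies, (double) copies / n);
    }
}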
Details
You can see the source of this in OpenJDK 17's java.util.ArrayList:
private Object[] grow(int minCapacity) {
    int oldCapacity = elementData.length;
    if (/* ... */) {
        int newCapacity = ArraysSupport.newLength(oldCapacity,
                minCapacity - oldCapacity, /* minimum growth */
                oldCapacity >> 1           /* preferred growth */);
        return elementData = Arrays.copyOf(elementData, newCapacity);
    } else {
        // ...
    }
}
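Ignoring the overflow handling that the real ArraysSupport.newLength performs, the growth calculation boils down to adding the larger of the two growth values to the old length. A simplified sketch:

// Simplified sketch of the growth calculation; the real newLength
// additionally guards against integer overflow and huge array lengths.
static int newLengthSketch(int oldLength, int minGrowth, int prefGrowth) {
    // Grow by the preferred amount (half of the old capacity) unless the
    // caller needs even more room right now.
    return oldLength + Math.max(minGrowth, prefGrowth);
}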
The preferred growth is oldCapacity >> 1, i.e. half of the old capacity, so the array grows to roughly 1.5 times its previous size. The part that actually costs you performance and gives you the O(n) is Arrays.copyOf(...), which copies every existing element into the new, larger array.
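Since that copy is the expensive part, a practical aside (general usage advice, not something the JDK does for you): if you know roughly how many elements you are going to add, pre-size the list so the backing array rarely or never has to grow:

import java.util.ArrayList;
import java.util.List;

public class PreSizingExample {
    public static void main(String[] args) {
        int expectedCount = 10_000;             // assumed to be known up front
        // Capacity hint: the backing array starts large enough,
        // so no grow-and-copy cycles happen while filling the list.
        List<Integer> values = new ArrayList<>(expectedCount);
        for (int i = 0; i < expectedCount; i++) {
            values.add(i);
        }
        // For a list that already exists, ensureCapacity(int) has the same effect.
    }
}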
The grow method is called primarily from this private helper:
private void add(E e, Object[] elementData, int s) {
    if (s == elementData.length)
        elementData = grow();    // array is full: grow (and copy) first
    elementData[s] = e;          // then store the new element
    size = s + 1;
}
That helper is used by the main entry point add(E):
public boolean add(E e) {
    modCount++;                  // structural change, for fail-fast iterators
    add(e, elementData, size);
    return true;                 // a List always changes when an element is added
}
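Putting the pieces together, here is a small hypothetical trace (not JDK code) of how the internal capacity of a default-sized ArrayList evolves as elements are added, assuming the default initial capacity of 10 and the preferred growth of oldCapacity >> 1:

// Trace of capacities while growing from the default capacity of 10
// past 100 elements: 10 -> 15 -> 22 -> 33 -> 49 -> 73 -> 109.
public class CapacityTrace {
    public static void main(String[] args) {
        int capacity = 10;
        StringBuilder trace = new StringBuilder(String.valueOf(capacity));
        while (capacity < 100) {
            capacity += capacity >> 1;          // preferred growth: +50 %
            trace.append(" -> ").append(capacity);
        }
        System.out.println(trace);
    }
}

Each arrow in that trace corresponds to exactly one O(n) copy; everything in between is a plain O(1) write into the existing array.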