The memory in a std::vector is required to be contiguous, so it's typically represented as an array.
Your question about the complexity of the operations on a std::vector is a good one - I remember wondering this myself when I first started programming. If you append an element to a std::vector, then it may have to perform a resize operation and copy over all the existing elements to a new array. This will take time O(n) in the worst case. However, the amortized cost of appending an element is O(1). By this, we mean that the total cost of any sequence of n appends to a std::vector is always O(n). The intuition behind this is that the std::vector usually overallocates space in its array, leaving a lot of free slots for elements to be inserted into without a reallocation. As a result, most of the appends will take time O(1), even though every now and then you'll have one that takes time O(n).
That said, the cost of performing an insertion elsewhere in a std::vector will be O(n), because you may have to shift everything after the insertion point down.
You also asked why this is, if it defeats the purpose of having a dynamic array. Even if the std::vector just acted like a managed array, it's still a win over raw arrays. The std::vector knows its size, can do bounds-checking (with at()), is an actual object (unlike an array), and doesn't decay to a pointer. These extra features - coupled with the extra logic to make appends work quickly - are almost always worth it.