The description at cppreference (like the standard) doesn't provide complexity guarantees for the destructor of `std::vector`, but it does allow estimating worst-case (or, in some cases, best-case) execution times. It is necessary to make a few assumptions, though (i.e. to describe the particular cases of interest).
If `n` is the number of elements and the element type has a destructor, then there will be `n` calls of that destructor. The complexity is therefore `O(n)*O(d)`, or equivalently `O(n*d)`, where `O(d)` describes the complexity of a single destructor call for one element. That represents the worst-case complexity of `std::vector` destroying its elements in general.
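As a rough mental model (not actual library source), the element-destruction part can be pictured as a loop like the following; `destroy_elements` is a hypothetical helper:

```cpp
#include <cstddef>

// Hypothetical helper sketching how an implementation might destroy the
// elements of a vector: one destructor call per element, i.e. O(n) calls
// of cost O(d) each.
template <typename T>
void destroy_elements(T* first, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        first[i].~T();  // each call costs O(d)
}
```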
For types with no destructor (`int`, raw pointers, etc.) there is nothing preventing the implementation from recognising that destruction of an element is a no-op, and therefore not emitting a loop that executes the destructor for each element. In that case, the destruction of elements may be reduced to nothing, or `O(1)` in complexity terms. That represents a best-case complexity for destruction of the elements.
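A minimal sketch of how the hypothetical helper above could skip the loop entirely for such types, using `std::is_trivially_destructible` (real implementations may do this explicitly, or simply rely on the optimiser removing the empty loop):

```cpp
#include <cstddef>
#include <type_traits>

// The same hypothetical helper, now skipping the loop entirely when the
// element type is trivially destructible: destruction becomes a no-op, O(1).
template <typename T>
void destroy_elements(T* first, std::size_t n) {
    if constexpr (!std::is_trivially_destructible_v<T>) {
        for (std::size_t i = 0; i < n; ++i)
            first[i].~T();  // general case: O(n*d)
    }
    // else: nothing to do -- no loop is generated at all
}
```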
The other part of the destructor of `std::vector` is releasing the storage used by the elements (i.e. the raw memory). That will only happen once, but the time it takes is implementation-defined. Practically, some implementations may do it in constant time (e.g. a fixed number of operations, independent of the size of the raw memory) and others in linear time (the time to release a chunk of memory being proportional to the size of that memory). That suggests a worst-case execution time (WCET) complexity of `O(c)`, where `c` is the capacity of the vector, not its size. The best case (assuming deallocation is constant time) would be `O(1)`. [It's also not impossible that some implementations are worse than that, but I'll leave that alone.]
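Putting both parts together, a simplified (and purely illustrative) vector destructor might look like this; `data_`, `size_`, `capacity_` and `alloc_` are hypothetical members, not real library code:

```cpp
#include <cstddef>
#include <memory>

// A simplified, hypothetical vector showing both parts of the destructor:
// n destructor calls followed by a single deallocation of the raw storage.
template <typename T, typename Alloc = std::allocator<T>>
struct simple_vector {
    T* data_ = nullptr;
    std::size_t size_ = 0;      // n: number of constructed elements
    std::size_t capacity_ = 0;  // c: size of the raw storage
    Alloc alloc_;

    ~simple_vector() {
        for (std::size_t i = 0; i < size_; ++i)          // O(n*d)
            std::allocator_traits<Alloc>::destroy(alloc_, data_ + i);
        if (data_)                                       // single call; cost is
            std::allocator_traits<Alloc>::deallocate(    // implementation-defined:
                alloc_, data_, capacity_);               // often O(1), possibly O(c)
    }
};
```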
The implementation is also free to do other things on deallocation that affect execution time. For example, a security-focused implementation may run a loop that overwrites each byte three times before releasing the memory, in which case the WCET complexity would be `O(c)`. Equally, such an implementation may have some way of doing that overwriting in constant time.
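As an illustration of the first possibility, a hypothetical scrubbing step in the release path might look like this (a real implementation would need volatile writes or a function like `explicit_bzero`, since the compiler may otherwise elide stores to memory that is about to be freed):

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical hardened release path: three overwrite passes over the
// c bytes of raw storage before freeing it, making the release step O(c).
void scrub_and_release(void* raw, std::size_t c) {
    for (int pass = 0; pass < 3; ++pass)
        std::memset(raw, pass % 2 ? 0x00 : 0xFF, c);  // O(c) per pass
    ::operator delete(raw);  // the actual release of the storage
}
```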
It's then possible to work out the complexity for particular cases. For example:
Assuming release of storage (raw memory) is a constant-time operation, the elements don't have a destructor, and the implementation is optimised for that case, the WCET complexity of `vector`'s destructor will be `O(1)`.
Assuming release of storage (raw memory) is a constant-time operation, the elements are destructed in a loop by calling their destructor, and the capacity of the vector equals its size, the WCET complexity of `vector`'s destructor will be `O(n*d)`, where `d` describes the complexity of an element's destructor. If destruction of an element is a constant-time operation, this reduces to `O(n)`.
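Those two cases can be seen directly with concrete element types; the sizes here are arbitrary:

```cpp
#include <string>
#include <vector>

int main() {
    std::vector<int> trivial(1'000'000);            // int is trivially destructible:
                                                    // element destruction can be O(1)
    std::vector<std::string> nontrivial(1'000'000); // std::string has a destructor:
                                                    // n = 1'000'000 destructor calls,
                                                    // i.e. O(n)
}   // both vector destructors run here
```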