Suppose I use a std::unordered_set<MyClass>, and suppose sizeof(MyClass) is large, i.e. much larger than sizeof(size_t) and sizeof(void*). I will add a large number numberOfElementsToBeAdded of elements to the unordered set.
From https://stackoverflow.com/a/25438497/4237 , I find: "each memory allocation may be rounded up to a size convenient for the memory allocation library's management - e.g. the next power of two, which approaches 100% worst case inefficiency and 50% average, so let's add 50%, just for the list nodes as the buckets are likely powers of two given size_t and a pointer: 50% * size() * (sizeof(void*) + sizeof((M::value_type))"
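To make the per-element figure in that quote concrete, here is a purely illustrative sketch of the kind of node a node-based unordered_set typically allocates per element. This is not any real library's layout (implementations differ, and some also store a cached hash in the node); MyClassStandIn is a hypothetical stand-in for a large MyClass:

#include <cstdio>

// Purely illustrative: a simplified picture of what a node-based
// unordered_set allocates per element. Real layouts are
// implementation-defined and may also store a cached hash value.
template <class T>
struct NodeSketch {
    NodeSketch* next;   // chaining pointer (per-element overhead ~ sizeof(void*))
    T           value;  // the stored element itself
};

struct MyClassStandIn { char data[256]; };  // hypothetical "large" MyClass

int main() {
    // Per element: one node of roughly sizeof(void*) + sizeof(T) (plus padding).
    // Per bucket:  roughly one pointer in the bucket array.
    std::printf("per-node size:   %zu bytes\n", sizeof(NodeSketch<MyClassStandIn>));
    std::printf("per-bucket size: %zu bytes\n", sizeof(void*));
}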
This drives me to conclude that the actual memory consumption will be between 1*numberOfElements*sizeof(MyClass) and (1+1)*numberOfElements*sizeof(MyClass), modulo some additional memory, which is of little interest here because it is on the order of sizeof(size_t).
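To put hypothetical numbers on that range (all figures below are made up purely for illustration, with sizeof(MyClass) taken as 256 bytes and one million elements):

#include <cstddef>
#include <cstdio>

int main() {
    // Hypothetical figures, only to illustrate the bound stated above.
    constexpr std::size_t elementSize      = 256;        // stand-in for sizeof(MyClass)
    constexpr std::size_t numberOfElements = 1'000'000;

    constexpr std::size_t lowerBound = 1 * numberOfElements * elementSize; // ~256 MB
    constexpr std::size_t upperBound = 2 * numberOfElements * elementSize; // ~512 MB

    std::printf("expected consumption: %zu to %zu bytes\n", lowerBound, upperBound);
}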
However, I know the number of elements to be added in advance, so I call:
std::unordered_set<MyClass> set;
set.reserve(numberOfElementsToBeAdded);
//Insert elements
Reasoning by analogy with std::vector::reserve, I would guess that this avoids the potential overhead and would thus guarantee memory consumption of approximately 1*numberOfElements*sizeof(MyClass) (modulo some additional memory, again on the order of sizeof(size_t)).
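For reference, what the standard does pin down about reserve is the bucket count: set.reserve(n) is specified as equivalent to rehashing to at least n / max_load_factor() buckets, so the bucket array is sized up front and no rehash occurs while inserting up to n elements. A minimal check of that effect (using int as a stand-in element type):

#include <iostream>
#include <unordered_set>

int main() {
    std::unordered_set<int> set;  // int as a stand-in for MyClass
    std::cout << "buckets before reserve: " << set.bucket_count() << '\n';

    set.reserve(1'000'000);  // guarantees bucket_count() >= 1'000'000 / max_load_factor()
    std::cout << "buckets after reserve:  " << set.bucket_count() << '\n';
    std::cout << "max_load_factor():      " << set.max_load_factor() << '\n';
}

That guarantee covers the bucket array; it says nothing explicit about how the per-element nodes themselves are allocated, which is what my question is about.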
Could I rely on implementations of the standard library to guarantee that calling reserve like this will avoid the 0-100% (50% average) overhead mentioned in the above answer?
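In case it helps to frame an answer: here is a rough sketch (not a definitive measurement; Payload and PayloadHash are hypothetical stand-ins for MyClass and its hash) of how I could count the bytes the container actually requests, by plugging in a counting allocator. This only counts what the container asks for; the rounding-up described in the quoted answer would happen below this level, inside the allocation library itself.

#include <cstddef>
#include <functional>
#include <iostream>
#include <new>
#include <unordered_set>

static std::size_t g_bytesRequested = 0;  // total bytes the container asked for

// Minimal counting allocator: forwards to operator new/delete and tallies
// the byte count of every allocation request.
template <class T>
struct CountingAllocator {
    using value_type = T;

    CountingAllocator() = default;
    template <class U>
    CountingAllocator(const CountingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        g_bytesRequested += n * sizeof(T);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <class T, class U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

// Hypothetical stand-in for MyClass: an int key padded up to 256 bytes.
struct Payload {
    int  key;
    char padding[252];
    bool operator==(const Payload& other) const { return key == other.key; }
};
struct PayloadHash {
    std::size_t operator()(const Payload& p) const { return std::hash<int>{}(p.key); }
};

int main() {
    std::unordered_set<Payload, PayloadHash, std::equal_to<Payload>,
                       CountingAllocator<Payload>> set;
    set.reserve(1000);
    for (int i = 0; i < 1000; ++i) {
        set.insert(Payload{i});
    }
    std::cout << "bytes requested by the container: " << g_bytesRequested << '\n';
    std::cout << "payload bytes alone:              " << 1000 * sizeof(Payload) << '\n';
}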