There is a type std::size_t. It may be used to describe the size of an object, since it is guaranteed to be able to represent the maximum size of any object (at least, that is what is written here). But what exactly does that mean? After all, we do not actually have any objects in memory yet. So does that mean this type can store an integer representing the largest amount of memory we could theoretically use?
If I try to write something like
size_t maxSize = std::numeric_limits<std::size_t>::max();
new char[maxSize];
I get a compile-time error because the total size of the array is limited to 0x7fffffff. Why?
Moreover, if I pass a non-constant expression that is equal to maxSize, std::bad_array_new_length is thrown. If I pass an expression that is less than maxSize but still greater than 0x7fffffff, std::bad_alloc is thrown instead. I suppose std::bad_alloc is thrown because of a lack of memory, not because the size is greater than 0x7fffffff. Why does it happen this way? It would seem natural to throw a special exception whenever the size of the memory we want to allocate is greater than 0x7fffffff (which is the maximum value for a constant passed to new[] at compile time). And why is std::bad_array_new_length thrown only when I pass exactly maxSize? Is this case special?
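Here is roughly how I am testing the runtime case (a minimal reproduction; note that std::bad_array_new_length derives from std::bad_alloc, so it has to be caught first):

#include <cstddef>
#include <iostream>
#include <limits>
#include <new>

int main() {
    std::size_t maxSize = std::numeric_limits<std::size_t>::max(); // not a constant expression
    try {
        char* p = new char[maxSize];
        delete[] p;
    } catch (const std::bad_array_new_length& e) { // derived from bad_alloc, so catch it first
        std::cout << "bad_array_new_length: " << e.what() << '\n';
    } catch (const std::bad_alloc& e) {
        std::cout << "bad_alloc: " << e.what() << '\n';
    }
}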
By the way, if I pass maxSize to the vector's constructor like this:
vector<char> vec(maxSize);
std::bad_alloc is thrown, not std::bad_array_new_length. Does that mean that vector uses a different allocator?
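The corresponding test for the vector case looks like this (same catch order as above; the extra catch for std::exception is just a safety net in case the library reports the error some other way):

#include <cstddef>
#include <exception>
#include <iostream>
#include <limits>
#include <new>
#include <vector>

int main() {
    std::size_t maxSize = std::numeric_limits<std::size_t>::max();
    try {
        std::vector<char> vec(maxSize);
    } catch (const std::bad_array_new_length& e) {
        std::cout << "bad_array_new_length: " << e.what() << '\n';
    } catch (const std::bad_alloc& e) {
        std::cout << "bad_alloc: " << e.what() << '\n';
    } catch (const std::exception& e) {
        std::cout << "other exception: " << e.what() << '\n';
    }
}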
I'm trying to write my own array implementation. Using unsigned int to store the size, capacity, and indices seems like a bad approach. So is it a good idea to define an alias like this:
typedef std::size_t size_type;
and use size_type instead of unsigned int?
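For context, the kind of class I have in mind looks roughly like this (a simplified sketch; my_array and the member names are just placeholders, and copy operations are omitted for brevity):

#include <cstddef>

template <typename T>
class my_array {
public:
    typedef std::size_t size_type; // the alias in question

    explicit my_array(size_type n)
        : data_(new T[n]), size_(n), capacity_(n) {}

    ~my_array() { delete[] data_; }

    size_type size() const { return size_; }
    size_type capacity() const { return capacity_; }

    T& operator[](size_type i) { return data_[i]; }
    const T& operator[](size_type i) const { return data_[i]; }

private:
    T* data_;
    size_type size_;     // element count
    size_type capacity_; // allocated element count
};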