You are allocating 160000 of your `A`s with 5000 `int`s in them, an `int` usually being 4 bytes in size, so you are allocating 160000 · 5000 · 4 = 3,200,000,000 bytes, which is /1024 = 3,125,000 KiB, and /1024 again ≈ 3,051.76 MiB, so around 3 GB. That is close to the upper limit of what a 32-bit process can get, and I assume that even though you run x64 Windows, you compiled with the default x86 settings, meaning your program runs as a 32-bit process in Windows' compatibility mode. Add the overhead of your 160000 pointers, which you stored in one of the least memory-efficient standard containers (`std::list`), plus heap bookkeeping and paging overhead, plus the likely added padding, and you run out of memory.
To get back to your original question:
- The list is placed on the "stack", i.e. it has automatic storage duration (the more correct term). But that covers only its housekeeping data (for example a pointer to the first node, one to the last, and its size), not the nodes it contains. And even the nodes do not hold the big stuff, i.e. your `A`s, but merely pointers to them (`A*`s). The nodes themselves, as in every standard container except `std::array`, are stored on the heap, i.e. have dynamic storage duration, but they are nothing in size compared to the 5000 `int`s each pointed-to `A` holds. Since you allocated them with `new`, your `A`s are never cleaned up until you call `delete` on them. C++ is very different from Java here: your Java code probably ran as a 64-bit process, and the VM's garbage collector reclaimed the objects once it saw you would not use them again.
So if you want your `A`s on the "stack", i.e. with automatic storage duration, you can use a `std::array<A, 160000>` (which is a much better version of `A[160000]`), but I bet you will crash your stack with such sizes. (On most operating systems you get around 2 MB of stack per thread, it can be much lower, and your call tree needs room too.)
If you want your `A`s on the "heap", i.e. with dynamic storage duration, inside the list, use `std::list<A>` instead of `std::list<A*>` and remove your `new` expressions altogether. However, for multiple reasons, the best default container is `std::vector<A>`, which stores the elements in one big contiguous chunk of heap memory.
- There is no such explicit limit in the C++ standard. According to §3.7.4 of ISO/IEC 14882:2014, `new` either gets you the requested amount (or more), or it fails. So it depends on your runtime, i.e. the implementation, meaning operating system and compiler, but in general you can get as much as the operating system will give you, which as said above is around 3–4 GB for an x86/32-bit process. Otherwise it can be much more, or, in the case of embedded applications, much less, down to 0 (no dynamic allocation at all).