4

I'm using C++Builder 10.2 Tokyo on Windows 10 with 16 GB of RAM. If I run

#include <windows.h>   // MEMORYSTATUSEX, GlobalMemoryStatusEx
#include <cstdint>     // uint64_t

// Returns the amount of physical memory currently available, in MB.
uint64_t FreeMBs()
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);
    return status.ullAvailPhys / (1024 * 1024);
}

uint64_t Mem0 = FreeMBs();
std::vector<int64_t> v;
v.resize(1000000000); // 1 billion elements
uint64_t Mem1 = FreeMBs();

Mem0-Mem1 is around 8 GB.

If, instead of the above, I run

uint64_t Mem0 = FreeMBs();
int64_t* v = new int64_t[1000000000];
uint64_t Mem1 = FreeMBs();

then Mem0 - Mem1 is around zero. If I use malloc to reserve space for the array, Mem1 is still more or less unchanged from Mem0. I tried setting v[1000000000-1] = 0 to see if that would trigger something, but it didn't.

Why doesn't the reported free memory account for the array?

Asesh
NoComprende
  • Maybe try a little harder to actually get those pages into memory. Try just looping over the array and setting bytes every now and then. – cadaniluk May 27 '18 at 11:50
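
A minimal sketch of the loop cadaniluk suggests, continuing from the `new int64_t[1000000000]` snippet in the question (the 4096-byte stride is an assumption; it only has to touch each OS page at least once):

int64_t* v = new int64_t[1000000000];

// Touch one byte in every 4 KiB page so the OS must back it with physical memory.
char* bytes = reinterpret_cast<char*>(v);
for (size_t i = 0; i < 1000000000ull * sizeof(int64_t); i += 4096)
    bytes[i] = 1;

Mem1 = FreeMBs(); // Mem0 - Mem1 should now drop by roughly 8 GB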

1 Answer

5

Write to the array and you'll see different results.

The OS simply doesn't back your allocation with physical pages before it needs to. That is a good strategy, since in many cases applications ask for memory that they then never touch. By deferring the work until the memory is actually needed (when you write to a page) and satisfying the allocation in the page-fault handler instead, the system as a whole saves a lot of memory.

In other words: when you allocate memory, you usually just get a range of virtual addresses; the mapping to (and allocation of) real physical memory happens later, or not at all if you never touch it.

Additionally, in some cases, if you never read from the memory you allocate, the compiler may optimize away all stores to it, since you obviously don't care about the values.
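
To illustrate, here is a minimal sketch of that experiment. It reuses the FreeMBs() helper from the question; the 0xAB fill value and the volatile read are only there to force real writes and keep the compiler from discarding them as dead stores.

#include <windows.h>
#include <cstdint>
#include <cstddef>
#include <cstring>
#include <iostream>

// FreeMBs() as defined in the question.
uint64_t FreeMBs()
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);
    return status.ullAvailPhys / (1024 * 1024);
}

int main()
{
    const std::size_t count = 1000000000; // 1 billion int64_t, roughly 8 GB

    uint64_t before = FreeMBs();
    int64_t* v = new int64_t[count];
    uint64_t afterNew = FreeMBs();   // typically almost unchanged

    std::memset(v, 0xAB, count * sizeof(int64_t)); // writing commits the pages
    uint64_t afterWrite = FreeMBs(); // now roughly 8 GB lower

    // Read a value back so the stores cannot be optimized away.
    volatile int64_t sink = v[count - 1];
    (void)sink;

    std::cout << "free MB: " << before << " -> " << afterNew
              << " -> " << afterWrite << "\n";
    delete[] v;
    return 0;
}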

Jesper Juhl
  • Thanks Jesper & cadaniluk. Why though doesn't the same thing happen with vectors? It's not like the vector was initialised. – NoComprende May 27 '18 at 13:08
  • @NoComprende hard to tell without looking into your standard library's implementation of `vector`. I would *expect* the same behaviour, but it *might* be allocating memory in a different manner or from a different pool, or (if doing a debug build) it *may* be zeroing the memory. Hard to say without reading the source (you should do so and come back and enlighten us when you've found out). – Jesper Juhl May 27 '18 at 13:13
  • 1
    Additionally, an allocator may first call [`VirtualAlloc`](https://msdn.microsoft.com/en-us/library/aa366887) to reserve a block of memory, and wait to commit blocks until they're accessed. This way an unused allocation doesn't count against the process commit charge. Once a block is committed, the OS memory manager allocates pages on demand from the zero-page list, as Jesper explained. – Eryk Sun May 27 '18 at 20:58
  • 1
    @NoComprende when you resize a `vector`, if it grows in size, it initializes every new element. So it has to touch every element of memory, which would force the OS to commit physical storage for it. Also, keep in mind that the memory manager in C++Builder pre-allocates memory in blocks, and caches freed memory for reuse, so not every allocation made in code will cause a change in memory usage at the OS level. Asking the OS for memory status can only tell you how much memory the app has asked the OS for, but doesn't tell you how the app is using that memory. – Remy Lebeau May 28 '18 at 00:13
  • Thanks eryksun & remy. I'm finding some useful memory stuff on this page https://stackoverflow.com/questions/63166/how-to-determine-cpu-and-memory-consumption-from-inside-a-process – NoComprende May 29 '18 at 08:03
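
Following up on Eryk Sun's comment above, here is a minimal sketch of the reserve-then-commit pattern with VirtualAlloc; the total size and the 64 KiB committed window are arbitrary values chosen for illustration, and the 8 GB reservation only makes sense in a 64-bit build.

#include <windows.h>
#include <cstdint>
#include <cstring>

int main()
{
    const SIZE_T total = 1000000000ull * sizeof(int64_t); // ~8 GB of address space

    // Reserve address space only: no commit charge, no physical pages yet.
    void* base = VirtualAlloc(nullptr, total, MEM_RESERVE, PAGE_NOACCESS);
    if (!base)
        return 1;

    // Commit a small window inside the reservation; it now counts against the
    // process commit charge, but physical pages are still assigned only on first touch.
    const SIZE_T window = 64 * 1024;
    void* p = VirtualAlloc(base, window, MEM_COMMIT, PAGE_READWRITE);
    if (p)
        std::memset(p, 0, window); // first write triggers demand-zero page faults

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}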