What you're seeing is a memory allocation optimization. See this answer for more details on how allocation and deallocation work. Basically, it is very inefficient (and on some systems outright impossible) to request memory from the OS one byte at a time, because the OS hands out memory in whole pages (4 KB on my system), not in single bytes.
The compiler vendors for your platform (Microsoft, since you are using msvc) know this very well and implement the low-level allocation functions so that they can handle sub-page allocations. malloc, for example, may allocate one page of memory at application startup. The program probably doesn't need that much memory right away, so malloc keeps track of both what it requested from the OS and what the application requested from malloc, and only asks the OS for another page when it runs out of space.
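As a rough illustration only (no real malloc works exactly like this), a sub-page allocator can be sketched as a bump allocator that grabs whole pages and carves them up. The page request is shown with operator new as a stand-in for the actual OS call (VirtualAlloc, mmap, ...):

```cpp
#include <cstddef>
#include <new>

// Toy bump allocator: grabs memory one "page" at a time and hands out small
// chunks from that page. It only handles requests smaller than a page and
// leaks old pages, which is fine for a sketch.
class PageAllocator {
    static constexpr std::size_t kPageSize = 4096;  // assumed 4 KB page size

    char*       page_ = nullptr;   // current page being carved up
    std::size_t used_ = kPageSize; // forces a page request on the first call

public:
    void* allocate(std::size_t bytes) {
        if (used_ + bytes > kPageSize) {
            // Out of reserved space: request a fresh page. A real allocator
            // would call the OS here; operator new is just a placeholder.
            page_ = static_cast<char*>(::operator new(kPageSize));
            used_ = 0;
        }
        void* result = page_ + used_;
        used_ += bytes;
        return result;
    }
};

int main() {
    PageAllocator alloc;
    // 1-byte requests are served from the same page; only every 4096th
    // request actually reaches the "OS".
    for (int i = 0; i < 10000; ++i)
        alloc.allocate(1);
}
```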
The standard library containers (like std::vector) work in a very similar way (see std::vector::resize and std::vector::reserve).
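You can observe the same over-allocation pattern through std::vector's capacity(), which grows in steps (commonly by a factor of 1.5 or 2, depending on the implementation) rather than one element at a time:

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<char> v;
    std::size_t last_capacity = 0;

    for (int i = 0; i < 1000; ++i) {
        v.push_back('x');
        // capacity() only changes occasionally: the vector reserves more
        // elements than requested so most push_backs need no allocation.
        if (v.capacity() != last_capacity) {
            std::cout << "size " << v.size()
                      << " -> capacity " << v.capacity() << '\n';
            last_capacity = v.capacity();
        }
    }
}
```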
If you ramp up your allocation to 10 bytes per iteration (instead of 1), you will see the memory change much sooner. In my case the transition is visible:
There are 137434705464 free KB of virtual memory.
There are 137434705464 free KB of virtual memory.
There are 137434705464 free KB of virtual memory.
There are 137434701368 free KB of virtual memory.
There are 137434701368 free KB of virtual memory.
There are 137434701368 free KB of virtual memory.
You can see a single jump of 4096 KB, a whole multiple of the 4 KB page size on my system. At that point malloc (or whatever allocation function the runtime uses) ran out of reserved memory and requested a new chunk from the OS.
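For reference, a measurement loop along these lines produces that kind of output on Windows. This is only a sketch and assumes the numbers come from GlobalMemoryStatusEx (which matches the wording of the output above); the code in your question may differ:

```cpp
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main() {
    for (int i = 0; i < 20; ++i) {
        // Allocate 10 bytes per iteration. Most iterations are served from
        // memory the runtime already reserved, so the OS-level numbers
        // barely move; they only jump when a new chunk is requested.
        static_cast<void>(std::malloc(10));

        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);
        GlobalMemoryStatusEx(&status);
        std::printf("There are %llu free KB of virtual memory.\n",
                    static_cast<unsigned long long>(status.ullAvailVirtual / 1024));
    }
}
```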
Note that I used malloc here as a placeholder for any common memory allocation function.