
I'm currently learning C++. During some heap allocation exercises I tried to trigger a bad allocation. My physical memory is about 38 GB. Why is it possible to allocate such a large amount of memory? Is my byte calculation wrong? I don't get it. Can anyone give me a hint, please? Thanks.

#include <iostream>


int main(int argc, char **argv){
    const size_t MAXLOOPS {1'000'000'000};
    const size_t NUMINTS {2'000'000'000};
    int* p_memory {nullptr};

    std::cout << "Starting program heap_overflow.cpp" << std::endl;
    std::cout << "Max Loops: " << MAXLOOPS << std::endl;
    std::cout << "Number of Int per allocation: " << NUMINTS << std::endl;

    for(size_t loop=0; loop<MAXLOOPS; ++loop){
        std::cout << "Trying to allocate new heap in loop " << loop 
                << ". current allocated mem = " << (NUMINTS * loop * sizeof(int)) 
                << " Bytes." << std::endl;

        // Each block is intentionally never written to or freed; the pointer
        // from the previous iteration is simply overwritten (leaked).
        p_memory = new (std::nothrow) int[NUMINTS];
        if (nullptr != p_memory)
            std::cout << "Mem Allocation ok." << std::endl;
        else {
            std::cout << "Mem Allocation FAILED!." << std::endl;
            break;
        }
    }
    return 0;
}

Output:

...
Trying to allocate new heap in loop 17590. current allocated mem = 140720000000000 Bytes.
Mem Allocation ok.
Trying to allocate new heap in loop 17591. current allocated mem = 140728000000000 Bytes.
Mem Allocation FAILED!.

BBGhigno
  • This is what you are seeing: https://en.wikipedia.org/wiki/Virtual_memory. Basically the operating system can "swap out" memory to disk when you're not accessing it. It will ensure it is "swapped in" into memory if you need it again. – Pepijn Kramer Dec 01 '21 at 19:17
  • Because physical pages are *not* allocated until + unless you touch your address space. What initially happens, in effect, is that the kernel’s data structure describing your process’ address space is extended. That’s all. Only once you load/store from/to your new virtual address space will something happen — page faults. The CPU checks the TLB, then checks the page tables, still finds nothing, switches to privileged mode and jumps into the kernel, the kernel realizes “OK, this is in fact allowed”, maps a physical page to back the virtual page — **then** you are *really* using physical memory. – Andrej Podzimek Dec 01 '21 at 19:34
  • Another way to express that same principle: Untouched virtual address space stays “purely” virtual, without physical memory backing (with a per-(jumbo-)page granularity). – Andrej Podzimek Dec 01 '21 at 19:42
  • Thank you for the explanations and I think I understand now. I will dig deeper into that topic to enhance my knowledge. Also thanks for the links. – BBGhigno Dec 01 '21 at 19:52

1 Answer


Many (but not all) virtual-memory-capable operating systems use a concept known as demand paging: when you allocate memory, the OS only records the bookkeeping that allows you to use that memory later. It does not reserve actual pages of physical memory at that time.¹

When you actually attempt to read or write any byte within a page of that allocated memory, a page fault occurs. The fault handler detects that the page has been reserved but not yet demand-paged in. It then claims a page of physical memory and sets up the page table entry (PTE) before returning control to the program.

If you attempt to write into the memory right after each allocation (as in the sketch below), you may find that you run out of physical memory much faster.
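
For illustration, here is one way to modify the question's loop so that every block is actually touched. This is an assumed variant, not code from the original post, and the 0xAB fill value is arbitrary; on Linux with default overcommit settings the process may be ended by the OOM killer instead of ever seeing new return nullptr.

#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>

int main() {
    const std::size_t MAXLOOPS {1'000'000'000};
    const std::size_t NUMINTS {2'000'000'000};

    for (std::size_t loop = 0; loop < MAXLOOPS; ++loop) {
        std::cout << "Allocating block " << loop << " ("
                  << NUMINTS * sizeof(int) << " bytes)" << std::endl;

        int* p_memory = new (std::nothrow) int[NUMINTS];
        if (p_memory == nullptr) {
            std::cout << "Mem Allocation FAILED!" << std::endl;
            break;
        }

        // The only difference from the original: write to every byte so the
        // kernel has to back the whole virtual range with physical pages.
        std::memset(p_memory, 0xAB, NUMINTS * sizeof(int));
        std::cout << "Block " << loop << " allocated and touched." << std::endl;
    }
    return 0;
}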

Notes:

¹ It is possible to have an OS implementation that supports virtual memory but immediately allocates physical memory to back virtual allocations; virtual memory is a necessary, but not sufficient, condition for replicating your experiment.

One comment mentions swapping to disk. This is likely a red herring: the pagefile is typically comparable in size to physical memory, and the total allocation here was around 140 TB, which is far larger than individual disks. It is also pointless to page empty, untouched pages out to disk.
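
To see the lazy backing directly, here is a Linux-specific sketch (not part of the original answer; it assumes /proc/self/status is available and reports VmRSS in kB). The resident set size barely moves after the allocation itself and only grows once the pages are written to.

#include <cstddef>
#include <cstring>
#include <fstream>
#include <iostream>
#include <new>
#include <string>

// Read the process's resident set size (VmRSS, in kB) from /proc/self/status.
static long vm_rss_kb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmRSS:", 0) == 0)
            return std::stol(line.substr(6));
    }
    return -1;
}

int main() {
    const std::size_t NUMINTS {256'000'000};  // roughly 1 GB worth of ints
    std::cout << "RSS before allocation: " << vm_rss_kb() << " kB\n";

    int* p = new (std::nothrow) int[NUMINTS];
    if (p == nullptr)
        return 1;
    std::cout << "RSS after allocation:  " << vm_rss_kb() << " kB\n";

    std::memset(p, 0, NUMINTS * sizeof(int));  // touch every page
    std::cout << "RSS after touching:    " << vm_rss_kb() << " kB\n";

    delete[] p;
    return 0;
}

Watching the process in top (the RES column) shows the same effect from outside the program.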

nanofarad