
I was reading Programming: Principles and Practice Using C++ (second edition) by Bjarne Stroustrup. Exercise 6 of Chapter 17 (page 624) asks the reader to find out what happens when new runs out of memory. Here is my code:

#include <exception>
#include <iostream>
using namespace std;

int c = 0;  // number of MB allocated

int main()
try
{
    cout << sizeof(double) << '\n';  // 8
    int n = 1024 * 1024;  // number of bytes in 1 MB
    while (true)
    {
        new double[n];  // allocates 8 MB on the heap (and deliberately leaks it)
        c += 8;
        cout << c << '\n';
    }
}
catch (exception& e)
{
    cout << "Error: " << e.what() << " | " << c / 1024.0 << " GB" << '\n';
    // Error: bad allocation | 237.281 GB
}

But I definitely don't have enough space for 237.281 GB in my main memory (my main memory is 64 GB). I'm running the program with Visual Studio 2022 on a 64-bit Windows 10 PC. What is actually happening with these allocations?

CPPL
    Try writing into this memory, the pages for it might be lazily allocated on the first use only. Also, are you sure the compiler did not optimize the allocation away? It can do that. – Quimby May 20 '22 at 10:31
    Windows will lazily allocate, but unlike linux-like systems it'll always guarantee that there's memory available when needed. It can do this as it also has a page file which can provide extra memory on demand. See https://learn.microsoft.com/en-us/windows/client-management/introduction-page-file. Also see https://stackoverflow.com/questions/22174310/windows-commit-size-vs-virtual-size – Mike Vine May 20 '22 at 10:38

1 Answer


Many programs allocate memory in big chunks and then only use part of it. Implementations of malloc() and new probably do that internally even if your program doesn't do it itself.

This leads to programs reserving more memory than they actually use, and to the system running out of memory sooner than it should. So modern OSes don't assign any physical memory when the program calls malloc() or new; they just record that the program is allowed to access a certain part of its address space. Only later, when the program writes to that part, does the kernel assign actual memory to the touched addresses (usually with a granularity of 4 KiB pages).

Your code allocates memory but never uses it. So the kernel just gives you an IOU without ever having to back it with actual memory. This is often called memory overcommit. Windows assigns physical pages lazily too (though, as noted in the comments, it still guarantees committed memory via the page file). Linux has settings in /proc/sys/vm/overcommit_* to fine-tune the behavior to your needs.

Note: If you actually use the memory, the system will first assign physical memory to the parts you touch. When that runs out, it will use swap. When the swap also runs out, your program will get a segfault.

On a system with memory overcommit, malloc() and new basically never fail, and dealing with out-of-memory becomes impossible: your program simply dies at some unspecified later time. Even the user opening a webpage in a browser can get your program killed, because when the system runs out of memory it picks some (more or less random) process to kill to regain some.

This means two things:

  1. A normal program can simply ignore out-of-memory. It never happens or, if it does, it can't handle it anyway. Don't waste your time catching the bad_alloc exception.

  2. A critical program must make sure the system doesn't use memory overcommit or any attempt to deal with out-of-memory will just fail.

Goswin von Brederlow