
I'm trying to do an exercise from Stroustrup's C++PL4 book. The task is:

Allocate so much memory using new that bad_alloc is thrown. Report how much memory was allocated and how much time it took. Do this twice: once not writing to the allocated memory and once writing to each element.

The following code doesn't throw a std::bad_alloc exception. After I execute the program I get the message "Killed" in the terminal.

Also, the following code exits in ~4 seconds, but when I uncomment the memory-usage message

// ++i;
// std::cout << "Allocated " << i*80 << " MB so far\n";

the program runs for a few minutes. After some time it prints that terabytes of memory have been allocated, but I don't see much change in the System Monitor app. Why is that?

I use Linux and the System Monitor app to watch memory usage.

#include <iostream>
#include <vector>
#include <chrono>
#include <new>      // std::bad_alloc

void f()
{
    std::vector<int*> vpi {};
    int i {};
    try{
        for(;;){
            int* pi = new int[10000];
            vpi.push_back(pi);
            // ++i;
            // std::cout << "Allocated " << i*80 << " MB so far\n";
        }       
    }
    catch(const std::bad_alloc&){
        std::cerr << "Memory exhausted\n";
    }
}

int main() {
    auto t0 = std::chrono::high_resolution_clock::now();
    f();
    auto t1 = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(t1-t0).count() << " ms\n";
}
  • The output will take *a lot* more time than a simple memory allocation. If it runs for 2 minutes, it might be 1:56 for the output and still 4 seconds for the allocations. – Bo Persson Sep 20 '15 at 10:33
  • That was my intuition. But why does it print that, let's say, terabytes of memory were allocated so far? –  Sep 20 '15 at 10:37
  • possible duplicate of [A way to determine a process's "real" memory usage, i.e. private dirty RSS?](http://stackoverflow.com/questions/118307/a-way-to-determine-a-processs-real-memory-usage-i-e-private-dirty-rss) – danielschemmel Sep 20 '15 at 10:45
  • In the modern cruel world, calling `new` (as well as `malloc` or even `brk()`) doesn't necessarily allocate memory. It just sends (through a chain of layers) a request to the OS, and the OS assigns a _virtual_ memory area (rounded up to pages). Only _accessing_ a given page performs an actual allocation. Moreover, modern OSes allow "overcommit", i.e. handing out more memory to all applications in total than the OS can actually back, even with swap. This is done because it is quite rare for all applications to need all of their allocated memory at once, so their actual needs can usually be served sequentially. – user3159253 Sep 20 '15 at 10:52
  • In the worst case, when the apps really do access their memory, a special system service called the OOM killer comes onto the scene and kills apps (almost randomly :)). So relying on `bad_alloc` is a bad idea indeed; it may be raised or it may not, depending on the current OS settings and on the behaviour of the other applications at the moment your app runs. – user3159253 Sep 20 '15 at 10:54
  • To _increase the chances_ of actually allocating a physical page, you may access each element right after allocating it, but again this is not a firm guarantee, it just improves the odds – user3159253 Sep 20 '15 at 11:01
  • Also check this article: http://opsmonkey.blogspot.ru/2007/01/linux-memory-overcommit.html – user3159253 Sep 20 '15 at 11:03
  • @user3159253: Why don't you put all of this into an answer? – Christian Hackl Sep 20 '15 at 11:27

1 Answer


In the modern cruel world, calling new (as well as malloc() or even brk()) doesn't necessarily allocate memory. It just sends (through a chain of layers) a request to the OS, and the OS assigns a virtual memory area (rounded up to system memory pages). Only subsequently accessing that memory performs the actual allocation.
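
To see this, here is a minimal sketch, assuming a Linux /proc filesystem; the helper name print_mem_status and the 100000000-element array size are just illustrative choices. It compares the process's virtual size (VmSize) with its resident size (VmRSS) around a large new[]:

#include <fstream>
#include <iostream>
#include <string>

// Print the VmSize (virtual) and VmRSS (resident) lines from /proc/self/status.
void print_mem_status(const char* label)
{
    std::ifstream status{"/proc/self/status"};
    std::string line;
    std::cout << label << '\n';
    while (std::getline(status, line))
        if (line.compare(0, 7, "VmSize:") == 0 || line.compare(0, 6, "VmRSS:") == 0)
            std::cout << "  " << line << '\n';
}

int main()
{
    print_mem_status("before new[]");
    int* p = new int[100000000];                    // ~400 MB of address space
    print_mem_status("after new[] (not written to)");
    for (long i = 0; i < 100000000; ++i) p[i] = 0;  // touch every page
    print_mem_status("after writing to every element");
    delete[] p;
}

On a typical overcommitting Linux setup, VmSize jumps immediately after the new[], while VmRSS grows only once the loop writes to the pages, which is also why System Monitor shows little change for untouched allocations.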

Moreover, modern OSes allow memory "overcommit". Sometimes (depending on the OS and its settings) applications can demand, in total, more memory than the OS could assign even theoretically, including all of its swap areas etc., all without any visible problem. See, for example, the article on Linux memory overcommit linked in the comments above.

This is done because, in real life, a situation where all applications actually use all of their allocated memory at the same time is quite improbable. Most of the time (99.99…% of it) applications use only parts of their memory, and do so sequentially, so the OS has a chance to serve their actual needs seamlessly.

To increase the chances of actually triggering a memory allocation error, you can write to each element right after allocating it, but again I wouldn't call that a firm guarantee; it merely improves the odds.
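
Applied to the exercise, here is a minimal variation of the question's loop that does exactly that; the 10000-int block size comes from the question, while the helper name f_touch and the reporting line in the catch block are my own additions:

#include <iostream>
#include <new>
#include <vector>

void f_touch()
{
    std::vector<int*> vpi;
    try {
        for (;;) {
            int* pi = new int[10000];
            for (int j = 0; j < 10000; ++j)
                pi[j] = j;                // write to every element, forcing physical pages
            vpi.push_back(pi);
        }
    }
    catch (const std::bad_alloc&) {
        std::cerr << "Memory exhausted after roughly "
                  << vpi.size() * 10000 * sizeof(int) << " bytes\n";
    }
}

Even so, on a default Linux configuration the OOM killer may still terminate the process before bad_alloc is ever thrown.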

In the worst case, when such an OS finds that it can't actually back all the memory it has promised because too many apps are simultaneously touching their seemingly allocated data, the OS memory manager runs a special procedure called the "OOM killer", which simply kills heuristically (= almost randomly :)) chosen applications.

So relying on bad_alloc is a bad idea nowadays. Sometimes you can reliably get it (e.g. when artificially limiting your app with ulimit/setrlimit), but in general your application runs in an environment that won't guarantee anything. Just don't be a memory hog, and pray for the rest :)
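
As a sketch of the ulimit/setrlimit route (Linux/POSIX only, and the 1 GiB cap is an arbitrary example value): RLIMIT_AS limits the process's virtual address space, so new fails with bad_alloc instead of waking up the OOM killer.

#include <cstdio>
#include <iostream>
#include <new>
#include <sys/resource.h>   // POSIX setrlimit()

int main()
{
    // Cap this process's address space at 1 GiB.
    rlimit lim{};
    lim.rlim_cur = 1ull << 30;
    lim.rlim_max = 1ull << 30;
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        std::perror("setrlimit");
        return 1;
    }

    try {
        for (;;)
            new int[10000];               // leak on purpose: we want to hit the limit
    }
    catch (const std::bad_alloc&) {
        std::cerr << "bad_alloc thrown as expected\n";
    }
}

The same limit can be set from the shell with `ulimit -v` (the value is in kilobytes) before running the unmodified program.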

user3159253