3

I am trying to fill a vector with pointers to Circle objects. Sometimes the bad_alloc catch works, but sometimes it doesn't, and then I get this error message:

"This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information."

Maybe the vector can't allocate more memory... but then why doesn't the bad_alloc handler catch it?

#include <iostream>
#include <new>      // std::bad_alloc
#include <vector>
using namespace std;

// Circle's definition isn't shown here; its constructor takes two arguments.
Circle *ptr;
vector<Circle*> ptrarray;

try {
  for (long long i = 0; i < 80000000; i++) {
    ptr = new Circle(1, i);
    ptrarray.push_back(ptr);
  }
} catch (bad_alloc &ba) {
  cout << "Memory Leak" << endl;
}

Would be great if someone could help me ;) Thanks in advance

Madeye
  • Please add a compiler/OS tag. If the system uses lazy allocation it would explain this. Posting a complete program would be useful too, so others can reproduce (and check that there are no bugs in the code you didn't show). – M.M May 15 '14 at 12:50
  • If you're really truly running out of memory, it's possible there aren't even enough resources to run your handler (there could be a dynamic allocation somewhere inside that `cout`, for example, which could trigger a second, uncaught throw), in which case the program will just abort() and you'll get the message you quoted; see the sketch below these comments. – dlf May 15 '14 at 12:56
  • You probably want to try `ptrarray.reserve(80000000);` before you enter your loop. – Paul R May 15 '14 at 12:59
  • @dlf Is there anything in the body that needs to get created? There may be an implicit conversion to `std::string`, but I don't think so. I think everything is static with respect to the scope of the handler. It would be quite strange if there weren't enough memory for the handling mechanism itself... if any additional memory were needed. – luk32 May 15 '14 at 13:41
  • @luk32 I don't know; nothing in the code we see, but I don't know what the `<<` is doing internally, or the constructor of `bad_alloc`. – dlf May 15 '14 at 13:44
  • @dlf Well, even if... IMO some memory should be reserved at the beginning of `try`, to ensure such a simple handler would not have a problem with memory. I find it quite dangerous. Even in the case of another exception, if you are on the borderline of OOM you can blow up with something this simple. – luk32 May 15 '14 at 13:56
  • @luk32 I'm not trying to suggest any changes; just pointing out that if a second out-of-memory error occurred in the process of handling the first one, it would explain why the program spontaneously exited without running the exception handler. Really, you can't hope to handle out-of-memory conditions after they happen. – dlf May 15 '14 at 13:58
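
Regarding dlf's point about the handler itself allocating: here is a minimal sketch of an exhaustion loop whose handler sticks to `fputs` with a string literal, which typically performs no dynamic allocation. Whether `cout` allocates inside a handler is implementation-specific, so this is a precaution rather than a guarantee:

#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    try {
        for (;;)
            blocks.push_back(new char[1024 * 1024]);  // grab 1 MiB at a time
    } catch (const std::bad_alloc&) {
        // fputs with a string literal typically performs no dynamic
        // allocation, unlike the iostream path, which might.
        std::fputs("out of memory\n", stderr);
    }
    for (char* p : blocks)  // release everything that was allocated
        delete[] p;
}

Note that on an OS that overcommits memory, the loop may get the process killed before the catch clause ever runs, which is the other failure mode discussed in the answers below.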

3 Answers

2

Many operating systems will allow a process to request more virtual address space (nominally available memory) than there is virtual memory to back it, on the assumption that the process may never actually touch all of those pages. Famously, this is what makes sparse arrays practical on such systems.

But as you access each page, the CPU generates a fault and the OS must find physical memory to back it (swapping pages out to non-RAM swap disks/files etc. too, if configured). When all options are exhausted (or sometimes when the OS is dangerously close to the limit and some protective process decides it's better to kill some processes than to let known-critical ones start failing), you may get an error like the one you've observed. Ultimately, there's no control over this at the C++ level. You can reserve and write to all the pages quickly, so you'll likely fail before doing all your processing, but even then you may be terminated in a desperately low-memory situation.
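
A minimal sketch of that "reserve and write all pages quickly" idea; the 4096-byte page size and the ~2 GiB request are assumptions for illustration (real code should query the page size from the OS):

#include <cstddef>
#include <cstdio>
#include <new>

int main() {
    const std::size_t n = 1ull << 31;  // ~2 GiB; an arbitrary test size
    const std::size_t page = 4096;     // assumed page size; query the OS in real code
    try {
        char* buf = new char[n];       // default-init: pages may not be physically backed yet
        for (std::size_t i = 0; i < n; i += page)
            buf[i] = 1;                // write one byte per page to force the OS to commit it
        std::puts("all pages committed");
        delete[] buf;
    } catch (const std::bad_alloc&) {
        std::puts("allocation refused up front");
        // On an overcommitting OS the process may instead be killed during
        // the writing loop, with no exception to catch.
    }
}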


Separately, you may be able to fit a lot more circles into memory if you store them by value. That said, you may not if `sizeof(Circle) > sizeof(Circle*)` and fragmentation is limiting you, in which case you might try a `std::deque`. Anyway:

try
{
    std::vector<Circle> array;
    array.reserve(80000000);
    for (long long i = 0; i < 80000000; i++) {
        array.emplace_back(1, i);
    }
}
catch (const std::bad_alloc& ba)
{
    std::cerr << "Memory Exhaustion\n";
}
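
For completeness, a sketch of the `std::deque` alternative mentioned above. The `Circle` definition here is hypothetical (the question never shows it), just matching the two-argument constructor:

#include <deque>
#include <iostream>
#include <new>

// Hypothetical Circle matching the two-argument constructor used in
// the question; the real definition was never shown.
struct Circle {
    double radius;
    long long id;
    Circle(double r, long long i) : radius(r), id(i) {}
};

int main() {
    try {
        // deque allocates many fixed-size chunks instead of one huge
        // contiguous block, so fragmentation hurts it less.
        std::deque<Circle> circles;
        for (long long i = 0; i < 80000000; i++)
            circles.emplace_back(1, i);
    } catch (const std::bad_alloc&) {
        std::cerr << "Memory Exhaustion\n";
    }
}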
Tony Delroy
  • Whoa, doesn't such an approach invalidate the sense of trying to catch `bad_alloc`? It sounds quite dangerous, because you can execute some statements that have permanent consequences and blow up later. It would be impossible to implement anything transactional. Or am I wrong? – luk32 May 15 '14 at 13:31
  • Such an approach does indeed invalidate the concept of `std::bad_alloc`. That's life in the modern computer age. Konrad Rudolph's advice in this answer, http://stackoverflow.com/a/9456758/774499, is spot-on: *In general you cannot and should not try to respond to this error.* – David Hammen May 15 '14 at 13:34
  • Wow, so in general there is nothing you can be sure is safe when running OOM. Bummer; good thing RAM is cheap. – luk32 May 15 '14 at 13:55
  • @luk32: RAM is cheap, and disk for swap is a lot cheaper, so virtual memory can be very generous, but if programs are accessing enough of the allocated virtual memory that they're often blocked on page faults, the whole system starts to crawl. Your best bet is to configure the system to prioritise processes appropriately, e.g. for Linux [read this](http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html) – Tony Delroy May 16 '14 at 00:59
1

Monitor your process memory via Task Manager - you might be consuming all the memory allowed for the process (depending on your starting point and the size of `Circle`).

If you are on a 32-bit (Win32) machine, then you have ~2 GB of process address space for this operation.
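
For scale: 80,000,000 pointers at 4 bytes each is already ~320 MB for the vector alone in a 32-bit process, before counting the `Circle` objects themselves and per-allocation heap overhead, so hitting ~2 GB is quite plausible. If you'd rather check from code than from Task Manager, a minimal Windows-specific sketch using GetProcessMemoryInfo (declared in psapi.h; link with psapi.lib):

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        // Roughly the figures Task Manager shows for this process.
        std::printf("working set:      %llu bytes\n", (unsigned long long)pmc.WorkingSetSize);
        std::printf("peak working set: %llu bytes\n", (unsigned long long)pmc.PeakWorkingSetSize);
        std::printf("pagefile usage:   %llu bytes\n", (unsigned long long)pmc.PagefileUsage);
    }
    return 0;
}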

NirMH
  • The problem is not with running out of memory, but with the fact that the appropriate exception is not thrown or caught. – luk32 May 15 '14 at 13:33
0

First, how are you sure that the only possible exception thrown is `std::bad_alloc`? I would highly recommend adding a `catch (...)` block after your `catch (const bad_alloc&)` block just to verify that you're right. Of course, with `catch (...)` you won't know what was caught, only that it wasn't `bad_alloc`.
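
A sketch of that ordering; the allocation loop here is just a stand-in for the question's loop:

#include <exception>
#include <iostream>
#include <new>
#include <vector>

int main() {
    std::vector<char*> v;
    try {
        for (;;)
            v.push_back(new char[1024]);  // stand-in for the Circle loop
    } catch (const std::bad_alloc&) {
        std::cout << "bad_alloc" << std::endl;
    } catch (const std::exception& e) {
        std::cout << "other std::exception: " << e.what() << std::endl;
    } catch (...) {
        // Catches anything else; handlers are tried in order, so this
        // must come last.
        std::cout << "unknown exception type" << std::endl;
    }
}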

Second, if you somehow trigger undefined behavior (say, by dereferencing a NULL pointer), you won't necessarily get an exception; you won't necessarily get any behavior that makes sense according to the language rules.

Third, as already suggested, on Linux you might be triggering the out-of-memory (OOM) killer. It's not strictly standards-compliant behavior, but it's behavior you can run into in real life.

Max Lybbert