2

I'm writing a class that saves the state of the connected components of a graph, supporting dynamic connectivity: each time an edge is added or removed, I have to recalculate the neighbouring components to join or split them.

The only exception those methods can throw is std::bad_alloc; none of my dependencies throws anything else. So the only possible exceptions are out-of-memory failures from methods like std::unordered_set<...>::insert or std::deque<...>::push_back.

This complicates the design of my algorithms considerably, because I have to stage the differences in local data and then apply all the modifications from that cache inside a well-scoped try-catch block.
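The pattern I mean is something like the following minimal sketch (the function and the choice of two plain sets are illustrative, not my actual class): do every throwing allocation while building a temporary, and make the final commit a sequence of noexcept operations, so a `bad_alloc` leaves the original state untouched.

```cpp
#include <unordered_set>

// Illustrative sketch of the copy-and-commit (strong guarantee) pattern:
// merge component `b` into component `a`.
void merge_components(std::unordered_set<int>& a, std::unordered_set<int>& b)
{
    std::unordered_set<int> merged(a);   // may throw std::bad_alloc
    merged.insert(b.begin(), b.end());   // may throw std::bad_alloc
    // Commit phase: nothing below can throw, so on an exception above,
    // `a` and `b` are exactly as they were.
    a.swap(merged);                      // noexcept
    b.clear();                           // noexcept
}
```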

Readability is severely reduced, and the time needed to think through and write this exception-safe code grows a lot. Also, memory overcommit makes handling these exceptions feel somewhat pointless.

What do you do in situations like this? Is it really important to ensure exception safety when, if there is a genuine lack of memory, your code will probably fail anyway, just a bit later, and the program as a whole will too?

So, in short, is it worthwhile to handle lack-of-memory exceptions at all, considering that, as one comment points out, the exception-throwing mechanism itself could exhaust memory as well?

ABu
  • Can your program recover if it runs out of memory? If not, who cares. Just let the exception terminate the application, or catch it in `main` and deliver an appropriate error message. If it can, then you have to decide where that recovery point would be and let the exception bubble up to there. – NathanOliver Sep 09 '19 at 14:56
  • You should know that with glibc there is no guarantee the runtime will even be able to throw `std::bad_alloc`. The ABI requires exceptions to be allocated on the heap, and if this allocation fails, the throwing thread takes memory from the emergency pool, which can easily be exhausted if you use nested exceptions, in which case the runtime calls `std::terminate` and kills your process. See [this](https://stackoverflow.com/questions/45497684). In short, at least on Linux you cannot write out-of-memory-safe code with C++. You should use C instead - that is the only way. –  Sep 09 '19 at 14:58
  • @Peregring-lk You can claim that your code provides only the "basic exception guarantee" and leave everything simple. This is how most apps are written. Even if an application can recover from OOM (which is easily done for servers), it usually implies that the whole context associated with the job will be discarded. The strong exception guarantee is too "strong" for most use cases. –  Sep 09 '19 at 15:02
  • Regarding your edit, it depends on the circumstances. For example, in a GUI application, it can be worth trying to roll back to whatever user action caused the problem. For a terminal application, which typically just does one thing and either fails or succeeds, it may be less worthwhile. Also consider what types of resources you are handling. If you need to flush things, like committing database transactions or gracefully closing a connection, it becomes more worthwhile. If you only use memory and simply output a result, it might be less worth it. – François Andrieux Sep 09 '19 at 15:04

2 Answers

2

As you suggested, trying to handle out-of-memory situations gracefully within a process is somewhere between extremely difficult and impossible, depending on the memory behavior of the OS you are running on. On many OSes (such as Linux with its default settings), an out-of-memory scenario can result in your process simply being killed without any warning or recourse, no matter how carefully your code tries to handle bad_alloc exceptions.

My suggestion is to instead have your application launch a child process (e.g. it can launch itself with a special argument so the child knows it is the child and not the parent). The parent process then simply waits for the child to exit and, when it does, relaunches it.

That has the advantage of not only being able to recover from an out-of-memory situation, but also helping you recover from any other problem that might cause your child process to crash or otherwise prematurely exit.
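A POSIX-only sketch of this supervisor pattern (the `supervise` function and the worker callback are illustrative names, and error handling is trimmed for brevity; on Windows you would use CreateProcess/WaitForSingleObject instead):

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Fork a worker and relaunch it whenever it dies abnormally, whether
// from bad_alloc, the OOM killer, or any other crash.
// Returns the number of times the worker was launched.
int supervise(int (*worker)())
{
    int launches = 0;
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return launches;             // fork failed; give up
        if (pid == 0)
            _exit(worker());             // child: run the app, then exit
        ++launches;

        int status = 0;
        waitpid(pid, &status, 0);        // parent: wait for the child
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return launches;             // clean exit: stop relaunching
        std::fprintf(stderr, "worker died, relaunching\n");
    }
}
```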

Jeremy Friesner
0

It is almost impossible to ensure the desired OOM handling at the application level, especially because, as @StaceyGirl mentioned, there is no guarantee you will even be able to throw std::bad_alloc. Instead, it is much more important (and easier) to manage memory allocation up front. By using memory pools and smart-pointer templates you can achieve several advantages:

  • cleaner code
  • single place where your memory allocation can fail and thus should be handled
  • ability to ensure your application has required (or planned) amount of memory
  • graceful degradation. Since you decouple the "allocate the next memory chunk for the pool" event from the "give me some memory from the pool" request, at the moment of truth (std::unordered_set<...>::insert etc.) you will be able to handle exhaustion gracefully (not by throwing an exception) and your program will not halt unexpectedly.
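One standard-library way to sketch this pool idea is C++17 polymorphic allocators (assuming `<memory_resource>` is available; the `fill_components` function and the 64 KiB budget are illustrative): grab the whole budget up front, so the only places allocation can fail are the initial reservation and one known exhaustion point, rather than wherever the OS gives up.

```cpp
#include <cstddef>
#include <memory_resource>
#include <unordered_set>

// Illustrative: all container nodes draw from a fixed, preallocated
// buffer. With null_memory_resource() as upstream, exhausting the
// buffer throws bad_alloc at this one known place instead of falling
// back to the (possibly overcommitted) heap.
std::size_t fill_components(std::size_t n)
{
    static std::byte buffer[1 << 16];            // planned memory budget
    std::pmr::monotonic_buffer_resource pool(
        buffer, sizeof(buffer),
        std::pmr::null_memory_resource());       // no fallback to the heap

    std::pmr::unordered_set<int> components(&pool);
    for (std::size_t i = 0; i < n; ++i)
        components.insert(static_cast<int>(i));  // allocates from the pool
    return components.size();
}
```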
SiR