2

My question has two parts:

  • Is it possible that, if a segfault occurs after allocating memory but before freeing it, this leaks memory (that is, the memory is never freed resulting in a memory leak)?
  • If so, is there any way to ensure allocated memory is cleaned up in the event of a segfault?

I've been reading about memory management in C++ but was unable to find anything about my specific question.
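For concreteness, here is a minimal (hypothetical) illustration of the kind of situation I mean:

    int main() {
        int* data = new int[1024];   // heap allocation

        int* volatile p = nullptr;   // volatile so the compiler really emits the store
        *p = 42;                     // segfault here, before the delete[] below runs

        delete[] data;               // never reached - is this memory leaked?
        return 0;
    }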

Miles Yucht

5 Answers

6

In the event of a seg fault, the OS is responsible for cleaning up all the resources held by your program.

Edit:

Modern operating systems will clean up any leaked memory regardless of how your program terminates. The memory is only leaked for the life of your program. Most OS's will also clean up many other types of resources such as open files and socket connections.

jlunavtgrad
  • This is not necessarily true. An application may catch the signal and try to handle it gracefully, but of course in 99.9% of cases it is technically so hard to do that programs just dump the stack and terminate with an error code. When that happens, you are right - the OS releases most, but not necessarily all, resources of the process. It also depends on the OS and the application. For example, it might be a device driver segfaulting :) –  Jun 18 '12 at 16:33
  • Even if the application captures the segfault, the OS will clean up the memory when the application closes - or at least it should. – Jarry Jun 18 '12 at 16:37
  • @VladLazarenko If a device driver segfaults, the OS will crash, requiring a reboot - which means that all memory will be recovered. – James Kanze Jun 18 '12 at 17:20
  • @JamesKanze: Not necessarily. The kernel can try to recover from a device driver fault. Sometimes it renders itself useless and does not reboot. –  Jun 18 '12 at 18:21
4

Is it possible that, if a segfault occurs after allocating memory but before freeing it, this leaks memory (that is, the memory is never freed resulting in a memory leak)?

Yes and no: the process that crashes will be tidied up completely by the OS. However, consider other processes spawned by your process: they might not get terminated completely. Usually these shouldn't hold too many resources, but this may vary depending on your program. See http://en.wikipedia.org/wiki/Zombie_process
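As an aside (not from the linked article): on Linux specifically, one way to keep spawned children from outliving a crashed parent is to have each child request a "parent death signal" via prctl. A minimal sketch, assuming Linux:

    #include <sys/prctl.h>   // prctl, PR_SET_PDEATHSIG (Linux-specific)
    #include <unistd.h>      // fork, pause, sleep, getpid
    #include <csignal>       // SIGTERM
    #include <cstdio>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: ask the kernel to deliver SIGTERM to us if the parent dies,
            // e.g. because the parent segfaulted before it could clean us up.
            prctl(PR_SET_PDEATHSIG, SIGTERM);
            std::printf("child %d waiting for work\n", getpid());
            pause();   // would otherwise keep running after the parent crashed
            return 0;
        }
        // Parent: imagine a segfault happening somewhere in here - the child
        // above is still torn down, thanks to the death signal it requested.
        sleep(2);
        return 0;
    }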

If so, is there any way to ensure allocated memory is cleaned up in the event of a segfault?

If the program is non-critical (meaning there are no lives at stake if it crashes), I suggest simply fixing the segmentation fault. If you really need to be able to handle segmentation faults, see the answer on this topic: How to catch segmentation fault in Linux?

UPDATE: Please note that although it is possible to handle SIGSEGV signals (and to continue program flow afterwards), this is not a safe thing to rely on, since - as pointed out in the comments below - it is undefined behaviour, meaning different platforms/compilers/... may react differently.

So fixing segmentation faults (as well as access violations on Windows) should by all means have first priority. If you still use the suggested approach of handling the signal, it must be thoroughly tested, and if it goes into production code you must be aware of it and accept the consequences - which may vary depending on your requirements, so I will not name any.
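For reference, a minimal sketch of what installing such a handler looks like on Linux/POSIX (the handler name and the message are mine, not from the linked answer). It only logs and terminates; it does not try to continue, for the reasons above:

    #include <signal.h>   // POSIX sigaction
    #include <unistd.h>   // write, _exit (async-signal-safe)
    #include <cstdlib>    // EXIT_FAILURE

    extern "C" void on_segv(int) {
        // Only async-signal-safe calls are allowed here; write() and _exit() qualify.
        const char msg[] = "caught SIGSEGV, terminating\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(EXIT_FAILURE);   // terminate; do not try to resume normal execution
    }

    int main() {
        struct sigaction sa{};
        sa.sa_handler = on_segv;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, nullptr);

        int* volatile p = nullptr;
        *p = 1;   // deliberately fault to exercise the handler
        return 0;
    }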

Alex
  • Concerning the linked article: the accepted response is simply wrong; there is no way to convert a signal into an exception. – James Kanze Jun 18 '12 at 17:22
  • In the end, I will have fixed the segfault; I was just concerned with the possibility of debugging and meanwhile building up large memory leaks. Thanks so much! – Miles Yucht Jun 18 '12 at 17:25
  • @JamesKanze I did not test the answer from the linked article (yet), however the linked library and its code look plausible to me, and the author of the answer said he tested it. Though I agree it is not possible just by using try/catch without additional work, my question is: why exactly do you think the proposed solution is not possible at all? – Alex Jun 19 '12 at 07:18
  • @Vash Raising an exception in a signal handler is undefined behavior. On many systems, it will work 99.9% of the time, and result in a crash (or some other unpredictable behavior) the remaining 0.1%. On others, it will simply cause the program to terminate, even if there is a try block. (And because it is undefined behavior, you can't "test" it. It might work in your test, but fail in production code.) It's possible to implement signal handling and exceptions so that this would work, but I don't know of anyone who does. – James Kanze Jun 19 '12 at 07:56
  • @JamesKanze thanks for the clarification on undefined behavior. I updated my answer to reflect this. – Alex Jun 19 '12 at 09:04
2

The C++ standard does not concern itself with seg-faults (that's a platform-specific thing).

In practice, it really depends on what you do, and what your definition of "memory leak" is. In theory, you can register a handler for the seg-fault signal, in which you can do all necessary cleanup. However, any modern OS will automatically clean up after a terminating process anyway.

Oliver Charlesworth
0

One, there are resources the system is responsible for cleaning up. One of them is memory. You do not have to worry about permanently leaked RAM on a segfault.

Two, there are resources that the system is not responsible for cleaning up. You could write a program that inserts its pid into a database and removes it on close. That entry won't be removed on a segfault. You can either 1) add a handler to clean up that sort of non-system resource, or 2) fix the bugs in your program to begin with.
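A sketch of what 1) could look like on POSIX, using a pid file as a simpler stand-in for the database example (the file path and handler name are hypothetical, and only async-signal-safe calls are used inside the handler):

    #include <signal.h>   // sigaction, signal, raise, SIG_DFL
    #include <unistd.h>   // unlink

    // Hypothetical non-system resource: a pid file this program created at startup.
    static const char* kPidFile = "/tmp/myapp.pid";

    extern "C" void cleanup_on_fatal_signal(int sig) {
        unlink(kPidFile);        // unlink() is async-signal-safe
        // Restore the default action and re-raise, so the process still dies
        // with the original signal and the exit status reports the crash.
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int main() {
        struct sigaction sa{};
        sa.sa_handler = cleanup_on_fatal_signal;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, nullptr);
        sigaction(SIGABRT, &sa, nullptr);

        // ... write the pid file, then run the real program ...
        return 0;
    }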

djechlin
0

Modern operating systems separate applications' memory so that they can clean up after them. The problem with segmentation faults is that they normally only occur when something goes wrong. At that point, the default behavior is to shut down the application because it is no longer functioning as expected.

Likewise, barring some bizarre circumstance, if your application has hit a segmentation fault it has likely done something you cannot account for, so "cleaning up" is practically impossible. It's not out of the realm of possibility to use memory carefully, in the manner of transactional databases, to guarantee an acceptable roll-back state; however, doing so (depending on how fine-grained a level you aim for) may be beyond tedious.

A more practical approach may be to provide your own sort of sandboxing between application components and reboot a component if it dies, restoring it wholesale to an acceptable previously saved state. That way you can flush all of its allocated memory and have it start from scratch. You still lose whatever data it hadn't saved as of the last checkpoint, however.
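A bare-bones sketch of that supervisor pattern on POSIX (the component here is just a stub function; a real system would also restore the last saved checkpoint before resuming work):

    #include <sys/wait.h>   // waitpid, WIFEXITED, WIFSIGNALED, WTERMSIG
    #include <unistd.h>     // fork
    #include <cstdio>

    // Hypothetical component entry point; in a real system it would first
    // restore its state from the last checkpoint.
    static int run_component() {
        // ... component work, which may crash ...
        return 0;
    }

    int main() {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                return run_component();   // child: fresh address space, fresh start
            }
            int status = 0;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status)) {
                break;                     // clean shutdown - stop supervising
            }
            if (WIFSIGNALED(status)) {
                std::fprintf(stderr, "component died (signal %d), restarting\n",
                             WTERMSIG(status));
                // loop around: restart the component from its last saved state
            }
        }
        return 0;
    }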

Kaganar