122

I've run into memory leaks many times. Usually when I'm malloc-ing like there's no tomorrow, or dangling FILE *s like dirty laundry. I generally assume (read: hope desperately) that all memory is cleaned up at least when the program terminates. Are there any situations where leaked memory won't be collected when the program terminates or crashes?

If the answer varies widely from language to language, then let's focus on C(++).

Please note the hyperbolic usage of the phrases 'like there's no tomorrow' and 'dangling ... like dirty laundry'. Unsafe *malloc*-ing can hurt the ones you love. Also, please use caution with dirty laundry.
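
For concreteness, a minimal sketch of the kind of code I mean (the file name and sizes are arbitrary):

```cpp
// A minimal sketch of the leak in question: the pointers go out of scope,
// and the allocation and the FILE* are never released by the program.
#include <cstdio>
#include <cstdlib>

void leaky() {
    void *block = std::malloc(1024);             // never free'd
    std::FILE *f = std::fopen("out.txt", "w");   // never fclose'd
    (void)block;
    (void)f;
}   // both handles "dangle" here; only the OS can reclaim them now

int main() {
    for (int i = 0; i < 100; ++i)
        leaky();   // leaks accumulate with every call
    return 0;      // does the OS clean all of this up?
}
```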

Peter Mortensen
DilithiumMatrix
  • If you're running with a "modern" OS like Linux or Windows, then the OS itself will reclaim any unreleased memory when the program terminates. – Oliver Charlesworth Mar 17 '13 at 23:00
  • Instead of malloc-ing like there's no tomorrow, try pretending there is a tomorrow and keep track of your memory! – William Pursell Mar 17 '13 at 23:00
  • @WilliamPursell ah, so you're saying one should `calloc` like there's no tomorrow. Excellent. – DilithiumMatrix Mar 17 '13 at 23:04
  • "If the answer varies widely from language to language, then let's focus on C(++)." [tag:c] and [tag:c++] are not the same language! – johnsyweb Mar 17 '13 at 23:17
  • *"all memory is cleaned up when the program terminates"* - By programming the way you described, you'd run out of memory faster than you think. Note that you won't always work on applications that run for just a few minutes... Once you start collaborating on some serious projects, you'll realize the importance of proper memory management. – LihO Mar 17 '13 at 23:20
  • @LihO: I think the question is more about: `when working on short-lived applications, does memory management matter?` – Lie Ryan Mar 17 '13 at 23:26
  • @LieRyan: My point is that there should be proper management and no memory leaks whenever possible, no matter what kind of project it is. – LihO Mar 17 '13 at 23:28
  • @zhermes: The comment about C and C++ being different languages hides more than you think... In C++ you'll more often find yourself taking advantage of objects with automatic storage duration and following the RAII idiom; you let these objects take care of memory management for you (see the RAII sketch after this comment list). – LihO Mar 17 '13 at 23:54
  • If you don't want to handle your memory by hand, look for a garbage collector, like [Boehm's](http://en.wikipedia.org/wiki/Boehm_garbage_collector). – vonbrand Mar 18 '13 at 02:28
  • What about COM objects? I read they are not released if you forget to call Release on them, even after the application closes. – EddieV223 Mar 18 '13 at 06:22
  • I wonder what the folks from http://security.stackexchange.com would say about this... – Tobias Kienzler Mar 18 '13 at 14:46
  • It's not exactly an answer to your question, but I would like to share this article with you: [**When Linux Runs Out of Memory**](http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=1). Very helpful reading... – Grijesh Chauhan Mar 18 '13 at 17:33
  • If I understand correctly, this is one of the main reasons why Windows NT/2000/XP/Vista/8 are so much more stable than Windows 95/98/ME were: the 95 kernel wasn't able to clean up after apps very well (if at all?), so one program going crazy and crashing could easily bring down the whole system. – Kip Mar 23 '13 at 20:33
  • In C++, it's better to use the STL containers wherever possible rather than doing `new`s and `delete`s. The STL containers clean up well after themselves. – ruben2020 Apr 10 '13 at 03:02
  • Related question: http://stackoverflow.com/questions/1060160/os-resources-automatically-clean-up/2645869#2645869 – Adrian McCarthy Jul 28 '13 at 17:14
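
Since RAII comes up repeatedly in the comments above, here is a minimal sketch of the idiom (the file name is arbitrary, and the `FILE*` deleter is just one common pattern): objects with automatic storage duration release their resources in their destructors, so leaving scope leaves nothing to leak:

```cpp
// RAII sketch: each object releases its own resource in its destructor,
// so leaving scope (normally or via an exception) cleans everything up.
#include <cstdio>
#include <memory>
#include <vector>

int main() {
    std::vector<int> v(1000);                 // heap memory, freed by ~vector
    auto p = std::make_unique<int[]>(1000);   // freed by ~unique_ptr

    // C APIs can get the same treatment via a custom deleter:
    std::unique_ptr<std::FILE, int (*)(std::FILE *)>
        f(std::fopen("out.txt", "w"), std::fclose);

    if (f)
        std::fputs("no dangling laundry here\n", f.get());

    return 0;  // v, p, and f are all released here, in reverse order
}
```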

9 Answers

117

No. Operating systems free all resources held by processes when they exit.

This applies to all resources the operating system maintains: memory, open files, network connections, window handles...

That said, if the program is running on an embedded system without an operating system, or with a very simple or buggy operating system, the memory might be unusable until a reboot. But if you were in that situation you probably wouldn't be asking this question.

The operating system may take a long time to free certain resources. For example, the TCP port that a network server uses to accept connections may take minutes to become free, even if properly closed by the program. A networked program may also hold remote resources such as database objects. The remote system should free those resources when the network connection is lost, but it may take even longer to do so than the local operating system.
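
A minimal sketch of the TCP case (POSIX sockets; error handling omitted, and the port number is arbitrary): a restarted server can fail to bind while the old port lingers in TIME_WAIT, which is why listening sockets commonly set `SO_REUSEADDR`:

```cpp
// Sketch: even after a clean close (or process exit), the OS may keep the
// port in TIME_WAIT for minutes; SO_REUSEADDR lets a restart rebind at once.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    int yes = 1;  // without this, bind() can fail with EADDRINUSE on restart
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);
    bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
    listen(fd, 16);

    close(fd);  // the port itself may still be held by the OS for a while
    return 0;
}
```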

Joni
  • ... because they **need** to take care of all the memory pages which belonged to the process. – ulidtko Mar 17 '13 at 23:01
  • A common paradigm in RTOSs is the single-process, multiple-thread model, with no memory protection between 'tasks'. There's usually one heap. This is certainly how VxWorks used to work - and probably still does. – marko Mar 17 '13 at 23:23
  • Note that not all resources can be freed by the operating system. Network connections, database transactions, etc.: not closing them explicitly may cause some undesirable results. Not closing a network connection may cause the server to think you're still active for an indefinite period of time, and for servers that limit the number of active connections, it may accidentally cause denial of service. Not closing database transactions may cause you to lose uncommitted data. – Lie Ryan Mar 17 '13 at 23:25
  • Btw, in Windows, even though GDI objects will be freed at program termination, if too many of them get allocated by a program, other programs and the entire desktop can start failing to function. There's a hard limit on how many there can be. So leaking resources isn't necessarily only a performance problem. – Alexey Frunze Mar 18 '13 at 07:38
  • @Marko: Recent versions of VxWorks now support RTPs (real-time processes), which support memory protection. – Xavier T. Mar 18 '13 at 08:37
  • *"Operating systems free all resources held by processes when they exit."* Not strictly true. For example, on (at least) Linux, SysV semaphores and other IPC objects are not cleaned up on process exit. That's why there's `ipcrm` for manual cleanup, http://linux.die.net/man/8/ipcrm . – sleske Mar 18 '13 at 09:07
  • Also, if an object has a temporary file that it maintains, that _clearly_ won't get cleaned up afterwards. – Mooing Duck Mar 19 '13 at 20:13
  • @sleske: my two cents: some OSes *have* memory leaks too! Ever heard of Windows? – Gianluca Ghettini Apr 12 '13 at 08:39
  • Embedded systems aren't the only ones that don't free resources. For example, Windows kernel drivers are responsible for their own cleanup. I suppose the same is true for Linux. – SomeWittyUsername Jul 28 '13 at 15:22
  • Another exception: for X11 applications it is possible to create graphics resources which remain in memory on the X server even after the application has exited, crashed or otherwise closed its X server connection. See http://tronche.com/gui/x/xlib/display/close-operation.html and http://unix.stackexchange.com/a/9299 . – oliver Aug 14 '13 at 12:12
48

The C Standard does not specify that memory allocated by malloc is released when the program terminates. That is done by the operating system, and not all OSes (usually these are in the embedded world) release the memory when the program terminates.
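
For contrast, what the Standard does guarantee is `atexit`: handlers registered with it run on normal termination, so a program that cannot rely on its OS can make the cleanup explicit. A minimal sketch (the buffer and its size are arbitrary):

```cpp
// Sketch: atexit handlers are guaranteed to run on normal termination,
// so cleanup can be made explicit instead of hoping the OS does it.
#include <cstdlib>

static void *big_buffer = nullptr;

static void cleanup() {
    std::free(big_buffer);   // runs when main returns or std::exit is called
    big_buffer = nullptr;
}

int main() {
    big_buffer = std::malloc(1 << 20);
    std::atexit(cleanup);    // registered handlers run in reverse order
    // ... use big_buffer ...
    return 0;                // cleanup() fires here; abort() would skip it
}
```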

ouah
  • That is more or less because the C standard talks about C programs, not the operating systems on which C happens to run... – vonbrand Mar 18 '13 at 02:32
  • @vonbrand The C Standard could have had a paragraph that says that when `main` returns, all memory allocated by `malloc` is released. For example, it says that all open files are closed before program termination. For memory allocated by `malloc`, it is just not specified. Now of course my sentence regarding the OS describes what is usually done, not what the Standard prescribes, as it does not specify anything on this. – ouah Mar 18 '13 at 09:03
  • Let me correct my comment: the standard talks about C, not about how the program is started and stopped. You can very well write a C program that runs _without_ an OS. In that case there is nobody who will do cleanup. The standard _very_ deliberately doesn't specify anything unless needed, so as to not constrain uses without need. – vonbrand Mar 18 '13 at 09:48
  • @ouah: "_when_ main returns...". That's an assumption. We have to consider "_if_ main returns...". `std::atexit` also considers program termination via `std::exit`, and then there's also `std::abort` and (C++ specific) `std::terminate`. – MSalters Mar 18 '13 at 12:17
  • @ouah: If that had been included, `atexit` would not be usable. :-) – R.. GitHub STOP HELPING ICE Aug 13 '13 at 18:28
29

All the other answers have covered most aspects of your question w.r.t. modern OSes, but historically there is one case worth mentioning if you have ever programmed in the DOS world: Terminate and Stay Resident (TSR) programs would usually return control to the system but would remain resident in memory, where they could be revived by a software or hardware interrupt. It was normal to see messages like "out of memory! try unloading some of your TSRs" when working on these OSes.

So technically the program terminates, but because it still resides in memory, any leaked memory would not be released unless you unload the program.

So you can consider this to be another case, apart from OSes not reclaiming memory either because they are buggy or because the embedded OS is designed that way.

I remember one more example. Customer Information Control System (CICS), a transaction server which runs primarily on IBM mainframes, is pseudo-conversational. When executed, it processes the user-entered data, generates another set of data for the user, transfers it to the user's terminal node, and terminates. On activating the attention key, it revives to process another set of data. Because of the way it behaves, technically the OS won't reclaim memory from terminated CICS programs unless you recycle the CICS transaction server.

Peter Mortensen
Abhijit
  • That's really interesting, thanks for the historical note! Do you know if that paradigm was due to freeing memory being too computationally costly when it wasn't necessary? Or had the alternative just never been thought of yet? – DilithiumMatrix Mar 18 '13 at 04:44
  • @zhermes: It was computationally impossible, as DOS simply didn't track memory allocations for TSRs. Pretty much by definition: the goal was to _Stay Resident_. If you wanted your TSR to free some but not all memory, it was up to you to decide what to free. – MSalters Mar 18 '13 at 12:21
  • @zhermes: DOS (like CP/M, its forefather) wasn't what you'd call an operating system in the modern sense. It was really just a collection of I/O utilities that could be called in a standard way, bundled with a command processor that would let you run one program at a time. There was no notion of processes, and memory was neither virtual nor protected. TSRs were a useful hack that could tell the system they were taking up to 64K of space and would hook themselves into interrupts so they'd get called. – Blrfl Mar 18 '13 at 21:17
9

As the others have said, most operating systems will reclaim allocated memory upon process termination (and probably other resources like network sockets, file handles, etc.).

Having said that, memory may not be the only thing you need to worry about when dealing with new/delete (instead of raw malloc/free). The memory allocated with new may get reclaimed, but things that would be done in the destructors of those objects will not happen. Perhaps the destructor of some class writes a sentinel value into a file upon destruction. If the process just terminates, the file handle may get flushed and the memory reclaimed, but that sentinel value would never get written.

Moral of the story: always clean up after yourself. Don't let things dangle. Don't rely on the OS cleaning up after you. Clean up after yourself.
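
A minimal sketch of the sentinel scenario described above (the class and file name are made up for illustration): any termination path that skips destructors, such as `std::_Exit` or `std::abort`, also skips the write:

```cpp
// Sketch: destructors run on normal exit, but abrupt termination skips them,
// so the sentinel below is never written in that case.
#include <cstdlib>
#include <fstream>

struct SentinelWriter {   // hypothetical class, for illustration only
    ~SentinelWriter() {
        std::ofstream("state.txt") << "clean-shutdown\n";
    }
};

int main(int argc, char **) {
    SentinelWriter guard;
    if (argc > 1)
        std::_Exit(1);   // terminates immediately: ~SentinelWriter never runs
    return 0;            // normal return: the sentinel gets written
}
```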

Andre Kostur
  • 'Don't rely on the OS cleaning up after you. Clean up after yourself.' This is often imp... 'very, very difficult' with complex multithreaded apps. Actual leaks, where all references to a resource have been lost, are bad. Allowing the OS to clean up instead of explicitly releasing references is not always bad, and often the only reasonable course to take. – Martin James Mar 18 '13 at 00:58
  • In C++, destructors _will_ get called on termination of the program (unless some less-than-bright `kill -9` fan shows up...) – vonbrand Mar 18 '13 at 02:37
  • @vonbrand True, but if we're talking about leaks of dynamic objects, those destructors won't run. The object going out of scope is a raw pointer, and its destruction is a no-op. (Of course, see RAII objects to mitigate this issue...) – Andre Kostur Mar 18 '13 at 07:21
  • The problem with RAII is that it insists on deallocating objects on process exit that it isn't actually important to get rid of. DB connections you want to be careful with, but general memory is best cleaned up by the OS (it does a far better job). The problem manifests itself as a program that takes _absolutely ages_ to exit once the amount of memory paged out goes up. It's also non-trivial to solve… – Donal Fellows Mar 18 '13 at 09:34
  • @vonbrand: It's not that simple. `std::exit` will call dtors, `std::abort` won't, uncaught exceptions might. – MSalters Mar 18 '13 at 12:23
7

This is more likely to depend on the operating system than the language. Ultimately, any program in any language will get its memory from the operating system.

I've never heard of an operating system that doesn't recycle memory when a program exits or crashes. So if your program has an upper bound on the memory it needs to allocate, then just allocating and never freeing is perfectly reasonable.

john
  • Could you screw up the kernel's memory picture in the case of a simplistic OS?.. Like those operating systems without even multitasking. – ulidtko Mar 17 '13 at 23:03
  • @ulidtko, this _will_ screw things up. If my program requires, say, 1GiB once in a while, and grabs that for the duration, it is denying the use of those resources to others even while not using them. That might matter today, or not. But the environment _will_ change radically. Guaranteed. – vonbrand Mar 18 '13 at 02:35
  • @vonbrand Rare use of 1GiB isn't a problem normally (as long as you've got plenty of physical memory), as modern operating systems can page out the bits that aren't currently active. The problem comes when you've got more virtual memory in _active_ use than you've got physical memory in which to host it. – Donal Fellows Mar 18 '13 at 09:30
5

All operating systems deserving the title will clean up the mess your process made after termination. But there are always unforeseen events: what if the program was denied access somehow, and some poor programmer did not foresee the possibility, so it doesn't try again a bit later? It's always safer to just clean up yourself IF memory leaks are mission-critical - otherwise it's not really worth the effort IMO, if that effort is costly.

Edit: You do need to clean up memory leaks if they are in a place where they will accumulate, like in loops. The memory leaks I speak of are ones that stay constant in size throughout the course of the program; if you have a leak of any other sort, it will most likely be a serious problem sooner or later.

In technical terms, if your leaks are of memory 'complexity' O(1), they are fine in most cases; O(log n) is already unpleasant (and in some cases fatal); and O(N)+ is intolerable.
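
A minimal sketch of that distinction (the function names and sizes are made up): the first leak is O(1), a single block for the program's lifetime; the second grows with the number of iterations and will exhaust memory eventually:

```cpp
// Sketch: a constant-size leak vs. a leak that grows with the workload.
#include <cstdlib>

static char *g_scratch = nullptr;

void init_once() {
    g_scratch = static_cast<char *>(std::malloc(4096));  // O(1): leaked once,
}                                                        // harmless in practice

void per_request() {
    char *buf = static_cast<char *>(std::malloc(4096));  // O(N): leaked on
    (void)buf;                                           // every call
}

int main() {
    init_once();
    for (int i = 0; i < 1000000; ++i)
        per_request();   // roughly 4 GB leaked by the end of the loop
    return 0;
}
```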

5

If the program is ever turned into a dynamic component ("plugin") that is loaded into another program's address space, its leaks will be troublesome, even on an operating system with tidy memory management: the host process can keep running, and keep accumulating the leaked memory, long after the plugin's work is done. We don't even have to think about the code being ported to less capable systems.

On the other hand, releasing all memory can impact the performance of a program's cleanup.

In one program I worked on, a certain test case required 30 seconds or more for the program to exit, because it was recursing through the graph of all dynamic memory and releasing it piece by piece.

A reasonable solution is to have the capability there and cover it with test cases, but turn it off in production code so the application quits fast.
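
One possible shape for that (a sketch; the FULL_CLEANUP macro and the graph structure are made up): compile the full teardown into test builds, where a leak checker such as Valgrind can verify that every allocation is owned, and skip it in production so exit is immediate:

```cpp
// Sketch: full recursive teardown only in test builds; production builds
// skip it and let the OS reclaim everything at once on exit.
#include <vector>

struct Node { std::vector<Node *> children; };  // hypothetical app state

Node *build_graph() {
    Node *root = new Node;
    for (int i = 0; i < 1000; ++i)
        root->children.push_back(new Node);
    return root;
}

void free_graph(Node *n) {          // the slow, piece-by-piece release
    for (Node *c : n->children)
        free_graph(c);
    delete n;
}

int main() {
    Node *g = build_graph();
    // ... run the application ...
#ifdef FULL_CLEANUP                 // define only for leak-checking builds
    free_graph(g);                  // slow, but proves every node is owned
#else
    (void)g;                        // production: the OS reclaims it at exit
#endif
    return 0;
}
```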

Kaz
3

Shared memory on POSIX-compliant systems persists until `shm_unlink` is called or the system is rebooted.
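
A minimal sketch (POSIX; the segment name is arbitrary, and some systems need -lrt at link time): the object created here outlives the process, whether it exits or crashes, until something calls `shm_unlink` or the machine reboots:

```cpp
// Sketch: POSIX shared memory is a named, kernel-persistent object; process
// exit does NOT remove it -- only shm_unlink or a reboot does.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                 // size the segment

    void *p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    static_cast<char *>(p)[0] = 42;      // visible to other processes

    munmap(p, 4096);
    close(fd);
    // Without the next line, /dev/shm/demo_segment persists after exit:
    shm_unlink("/demo_segment");
    return 0;
}
```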

klearn
2

If you have interprocess communication, this can lead to other processes never completing and continuing to consume resources, depending on the protocol.

To give an example: I was once experimenting with printing to a PDF printer in Java. When I terminated the JVM in the middle of a print job, the PDF spooling process remained active, and I had to kill it in the task manager before I could retry printing.

ratchet freak