238

Is it ever acceptable to have a memory leak in your C or C++ application?

What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?

What if a third party library forced this situation on users? Should one refuse to use that third party library no matter how great it otherwise might be?

I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.

user16217248
  • 3,119
  • 19
  • 19
  • 37
Imbue
  • 3,897
  • 6
  • 40
  • 42
  • 53
    If the memory consumption doesn't grow over time, it's not a leak. – mpez0 Mar 07 '10 at 19:39
  • 4
    Most applications (including all .NET programs) have at least a few buffers that are allocated once and never freed explicitly, so mpez0's definition is more useful. – Ben Voigt Sep 18 '10 at 20:12
  • 2
    Yes, if you have infinite memory. – user Oct 01 '13 at 11:30
  • A "benign" leak (if there is such a thing) is not a false positive -- it's a leak that was very correctly detected. Leak detection, even for leaks you personally don't feel like fixing, is a leak detector's whole reason for existing. – cHao Dec 18 '13 at 00:04
  • 3
    @mpez0 "If the memory consumption doesn't grow over time, it's not a leak"? That's not the definition of a memory leak. A leak is memory that has been leaked, which means it was not freed and you have no reference to it anymore, hence it is impossible for you to ever free it again. Whether it grows or not is irrelevant. – Mecki Aug 20 '18 at 19:36

50 Answers

339

No.

As professionals, the question we should be asking ourselves is not "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.

I like to keep things simple. And the simple rule is that my program should have no memory leaks.

That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.

It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.

But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.

To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?

Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.

If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it portrays a lack of commitment to quality, so I would consider alternatives.

Wosi
  • 41,986
  • 17
  • 75
  • 82
JohnMcG
  • 8,709
  • 6
  • 42
  • 49
  • 61
    True and not true at the same time. Ultimately most of us are wage slaves and any desire for craftsmanship must take a back seat to the requirements of the business. If that 3rd party library has a leak and saves 2 weeks of work, there may be a business case to use it, etc... – Cervo Nov 07 '08 at 19:28
  • 3
    I would use the library anyway, if it was something I needed and there were no decent alternatives, but I would log a bug with the maintainers. – tloach Nov 11 '08 at 20:08
  • 7
    While I'd personally go with exactly the same answer, there are programs that hardly free memory at all. The reason is that they are a) intended to run on OSes that free memory, and b) designed not to run very long. Rare constraints for a program indeed, but I accept this as perfectly valid. –  Nov 26 '08 at 19:02
  • Basically all mainstream OSes free memory, except when you have shared InterProcess objects and reference counting is used for them (i.e. COM on Windows, for instance). Even DOS, I think, did free memory. I would be curious to know exceptions :-) – Blaisorblade Jan 12 '09 at 03:27
  • It's an ok answer, but the OP is describing a (possibly) memory-inefficient program, not a memory leak. – Robert Paulson Mar 23 '09 at 02:22
  • as an example, python was (is?) notorious for leaking memory. choosing not to use it out of principle would be foolish. – Dustin Getz Aug 17 '09 at 16:22
  • 3
    To add some reasons for early checking: when your debugging tools are flooded with "benign" leaks, how are you going to find the "real" one? If you add a batch feature, and suddenly your 1K/hour leak becomes a 1K/second one? – peterchen Dec 21 '09 at 08:28
  • @Dustin: Like C++, it is difficult in python to manage memory in the presence of reference cycles, due to the reference counting scheme it uses. – Arafangion May 10 '10 at 14:13
  • 5
    Hmm is "not leaking memory" "perfect"? – JohnMcG Sep 01 '10 at 14:30
  • 1
    @JohnMcG - pertaining to memory leaks; yes. Perfect: being complete of its kind, without defect. – orokusaki Sep 18 '10 at 20:07
  • 1
    And pertaining to deaths, having a live patient at the end of surgery is "perfect" as well. – JohnMcG Sep 29 '10 at 02:04
  • @JohnMcG: If your program maintained its own Memory Pool (for example), would you really go to the trouble to free the free list before exiting the program? I don't doubt your skills as a professional, but If I were paying for your services, I'd regard that as needless gold plating. – fearless_fool Mar 27 '15 at 20:51
  • 1
    \*reads answer\* \*gets to line `I like to keep things simple. And the simple rule is that my program should have no memory leaks.`\* \*clicks upvote\* – brettwhiteman Apr 29 '15 at 01:39
  • While some points are true, this is an extremist answer. As professionals, we have an obligation to actually produce a product. Please feel free to email all of the brilliant err sorry "lazy" developers behind boost and tell them that they're not being good professionals or programmers, because out of the box, boost will print me a book of compiler warnings. –  May 25 '15 at 20:07
  • As most of you know, there is often some time constraint to deal with. While I totally agree that memory leaks are bad and sometimes tedious, the question I would rather ask myself is the following: is tracking down this memory leak worth the cost of doing it? Or how much time do I need to spend to replicate this excellent library which has a memory leak? Further, would another library without leaks necessarily be better? So there is a degree of trouble you may have to live with. However, it should be kept small. Try doing a search on technical debt. – patrik Aug 11 '15 at 21:01
  • 5
    Clearly late, but I'm amazed at how many keep trying to rationalize their memory leaks and other bad habits rather than accepting the fact that they're things they *shouldn't* do. The fact we may be "wage slaves" is irrelevant. Construction workers are also "wage slaves" and I seriously doubt anyone would try to rationalize *bad practices* in civil engineering --with building quality at risk. There's always a "business case" for taking "shortcuts". Keep in mind that you're all working for a *customer* that is paying you to do your job *properly*. – code_dredd Nov 06 '15 at 19:02
  • @ray Agreed. The problem is that we keep letting managers get away with urging devs to take shortcuts and skimp on quality control because everything always has to be delivered yesterday and few people seem to look beyond the short-term development cost to the long-term maintenance cost. In civil engineering, when a bridge or building collapses everyone sees the damage and carnage. In software, when a process dies due to memory leaks every two days, we write a script to restart it overnight or get the "server guy" to reboot things. At least it's good business for authors of monitoring tools. – G_H May 24 '17 at 07:11
  • 1
  • @G_H Maybe the deeper problem is that (generally) incompetent bosses keep getting away with being incompetent/ignorant about what they're managing. I used to have a boss who really thought that what was being requested was "simple" simply b/c he could express his needs in simple terms. By his reasoning, I wouldn't be surprised if he thought a request such as "I want a button that can solve all my problems" should be a piece of cake too. My solution would be for the program to provide the user a handgun with a single bullet, but I digress. – code_dredd May 24 '17 at 07:18
  • 1
    @ray That is so painfully accurate I can't even bring myself to fully comment on it right now. Technical managers should come from a technical background, but it seems it's mostly the types that aren't passionate about coding and software engineering are the ones that want to climb the corporate ladder ASAP to get out of the "coding trenches". – G_H May 24 '17 at 10:55
  • 1
    @ray - Citing the building industry as a model of intolerance of bad practice is naive to say the least. – Jeremy Jul 31 '17 at 11:55
  • @ray - "when a bridge or building collapses everyone sees the damage and carnage" somewhat misses the point itself, because that kind of catastrophic failure is the exception to the more insidious norm - just as in the software world. The world is full of bridges and buildings that haven't collapsed, but which were poorly built and exhibit structural defects that require varying degrees of maintenance and repair to keep them serviceable - analogous in its way to periodically rebooting a server. And clearly someone, somewhere, _did_ rationalise - at least to themselves - doing it that way. – Jeremy Jul 31 '17 at 15:16
  • @Jeremy I'll summarize and hope I don't have to elaborate much more than this: it was an analogy; no analogy is perfect. That said, mem leaks make programs unusable and can (eventually) bring down the whole system to its knees or get the process killed, similar to how poorly constructed buildings can be unusable or even collapse. The point here is that when the OS kills a process, no one bats an eye, but when buildings collapse, everybody loses their minds. The engineers might even lose their licenses (in the US), so not destroying their careers is an incentive to do it right the 1st time. – code_dredd Jul 31 '17 at 15:27
  • Hi @JohnMcG Could you please help me on this https://stackoverflow.com/questions/73597705/how-can-i-fix-this-memory-leak-issue? Thank you! – Mark Smith Sep 04 '22 at 14:45
82

I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.

Jim C
  • 4,981
  • 21
  • 25
  • Technically, it's still a leak because the rest of the system can't use that memory. – Bill the Lizard Nov 07 '08 at 19:08
  • 14
    Technically, a leak is memory that is allocated and all references to it are lost. Not deallocating the memory at the end is just lazy. – Martin York Nov 07 '08 at 19:13
  • 20
    If you have a 1-time memory leak of 4 GB, that's a problem. – John Dibling Nov 07 '08 at 19:16
  • 23
    Doesn't matter if it's growing or not. Other programs can't use the memory if you have it allocated. – Bill the Lizard Nov 07 '08 at 19:17
  • 1
    But his application is using that memory until it exits. I think he just means he didn't keep the initial pointer returned from the allocation. The object is still being useful "until the very last line of code in [the] application" so freeing it is not desired until the app exits. – sk. Nov 07 '08 at 19:23
  • 1
    @sk: Then that's perfectly okay. Whatever function uses the memory last should clean it up. – Bill the Lizard Nov 07 '08 at 19:47
  • 1
    I think the term for this situation is a memory "pool," capturing that there is some memory that has not been de-allocated, but it is not growing. – JohnMcG Nov 07 '08 at 22:15
  • The application starts, it allocates memory, and the pointer is kept. From there a half dozen global objects use that pointer continuously and in their destructors. How can the last function free the memory? – Imbue Nov 07 '08 at 22:36
  • 8
    > "Other programs can't use the memory if you have it allocated." Well, the OS can always swap your memory to disk, and allow other applications to use the RAM you weren't taking advantage of. – Max Lybbert Nov 07 '08 at 23:46
  • 1
    Paging is not a desirable state to be in. You can't just let your programs hang on to whatever memory they want and count on the OS to bail you out. If you deallocate memory you're not using the OS doesn't need to spend time paging, which leads to better performance for all applications running. – Bill the Lizard Nov 08 '08 at 00:38
  • 1
    @Imbue: If you're using the memory up until the program ends, then you're doing it right. – Bill the Lizard Nov 08 '08 at 00:40
  • 4
    If the program is very short-lived, then a leak might not be so bad. Also, while NOT ideal, paging isn't as expensive as some here make it out to be, because the program isn't interested in that memory (and thus won't be swapping all the time) - unless, of course, you have a GC... – Arafangion Mar 27 '09 at 00:51
  • 1
    _"I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. "_ 'till your code gets migrated to a class method and instantiated 1000 times. Then you have a code that literally bleeds from a 1000 wounds. – mg30rg Jun 01 '15 at 14:09
  • *"I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing"* Can't disagree more. A memory leak is not defined in terms of whether it grows or not; it's whether refs to allocated memory have been lost prior to it being deallocated. And it is a bug. That's almost like saying that a house leaning/sinking on one side is "fine" as long as it does not keep sinking, as if it had been "acceptable" for it to sink even a bit. – code_dredd May 24 '17 at 22:25
  • @MaxLybbert "the OS can always swap your memory to disk, and allow other applications to use the RAM you weren't taking advantage of." <- you speak about one of the slowest things your OS can do. – kingsjester Nov 09 '22 at 11:05
79

Let's get our definitions correct, first. A memory leak is when memory is dynamically allocated, e.g. with malloc(), and all references to the memory are lost without the corresponding free. An easy way to make one is like this:

#include <stdlib.h>
#define BLK ((size_t)1024)

int main(void){
    while(1){
        void * vp = malloc(BLK); /* previous block's address is overwritten and lost */
    }
}

Note that every time around the while(1) loop, 1024 (+overhead) bytes are allocated, and the new address assigned to vp; there's no remaining pointer to the previous malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.

What you're describing, though, sounds like

#include <stdlib.h>
#define LOTS (100 * 1024 * 1024) /* illustrative stand-in for "a lot of memory" */

int main(void){
    void * vp = malloc(LOTS);
    // Go do something useful
    return 0; /* the OS reclaims the allocation here */
}

You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.

Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.

Charlie Martin
  • 110,348
  • 25
  • 193
  • 263
  • Now consider a few dozen of these allocations. Now consider having to move the "main" body to a routine that gets called multiple times. Enjoy. - I agree with the sentiment that it's not such a big problem in this scenario, but scenarios change. As they say, always write code as if the guy to maintain it knows where you live. – peterchen Dec 21 '09 at 10:44
  • 2
    Well, the point is that memory that is malloc'ed and held until the program calls _exit() isn't "leaked". – Charlie Martin Dec 30 '09 at 01:55
  • 1
    It is a memory leak and it can impair your program. Future allocations can fail in this process, because I am sure you are checking that malloc returned non-nil everywhere. By over-using memory, such as in an embedded situation where memory is scarce, this could be the difference between life and death. – MikeJ Mar 07 '10 at 19:26
  • 11
    Mike, that's just not true. In a compliant C environment, ending main frees all process resources. In an embedded environment like you describe, you might see that situation, but you wouldn't have a main. Now, I'll grant that there might be flawed embedded environments for which this wouldn't be true, but then I've seen flawed environments that couldn't cope with += correctly too. – Charlie Martin Mar 11 '10 at 22:03
  • True - until your code gets migrated to a class method and instantiated 1000 times. Then you have code that literally bleeds from 1000 wounds. – mg30rg Jun 01 '15 at 14:11
  • 3
    Yes, you have now discovered that if you `malloc` too much memory it's a Bad Thing. It's still not a *leak*. It's not a *leak* until and unless it's `malloc`d memory to which the reference is lost. – Charlie Martin Jun 01 '15 at 19:44
  • @CharlieMartin: I'm used to having /* noreturn */ void main() in embedded environments. – Joshua Jul 17 '18 at 22:19
  • @Joshua yeah, and that's a bit of weirdness. I'd be surprised if having `return 0` caused a problem. Appalled, really. – Charlie Martin Jul 17 '18 at 22:23
  • @Charlie: Well if you want to halt the embedded environment. (Hint: you don't.) – Joshua Jul 18 '18 at 00:13
  • @Joshua Actually, I'm a little surprised at an embedded program having a `main`. What environment are you thinking of? – Charlie Martin Jul 18 '18 at 16:06
  • @CharlieMartin: Microchip. It had some ASM startup code that called main() with no parameters. Returning from main() went to an infinite loop. – Joshua Jul 18 '18 at 16:07
  • @Joshua Okay. I think I'd call that a design flaw or a bug -- I'd think ... jeez, I don't know what I'd think. It's not like `main` wouldn't return when you got to the `}` anyway. Did you have to write a `while(1)` inside the `main`? – Charlie Martin Jul 18 '18 at 16:11
41

In theory no, in practice it depends.

It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.

If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.

On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.

As for third party libraries that have leaks, if they cause a problem either fix the library or find a better alternative. If it doesn't cause a problem, does it really matter?

vfilby
  • 9,938
  • 9
  • 49
  • 62
  • I don't know if you read my whole question or not. I'm saying that the memory is used until the very end of the application. It doesn't grow with time. The only no-no is that there isn't a call to free/delete. – Imbue Nov 07 '08 at 19:07
  • 2
    Then it isn't really a memory leak. A memory leak is small amounts of unused but unfreed memory, this amount gets greater over time. What you are talking about is a memory droplet. Do not concern yourself with this unless your droplet is very large. – vfilby Nov 07 '08 at 19:10
  • "If it doesn't cause a problem, does it really matter?" Nope, it doesn't matter at all. I wish more people got that instead of getting religious. – Imbue Nov 07 '08 at 22:31
  • @Imbue -- don't ask a question if you don't want it to be answered. If you're fine with the memory pool or leak, bully for you. But many of us have had to work long hours correcting bugs a lazy developer had decided "doesn't cause a problem." – JohnMcG Nov 07 '08 at 22:54
  • 2
    @John: That is generally less a question of lazy developers and more a question of evolving software. We all make mistakes, bugs are our trade; we make them we fix them, that is what we do. It is always a balance between upfront cost and long-term maintenance, that balance is never straightforward. – vfilby Nov 07 '08 at 23:01
  • My point is that "religion" is there for a reason. Could I imagine a circumstance where I would release software with a memory leak or pool? Yes. Do I want to write on a public board that this is ok? No. – JohnMcG Nov 07 '08 at 23:32
  • > "[T]he memory is used until the very end of the application." If it's used, it's not a leak. > "It doesn't grow with time." The growing is usually caused by the same code leaking multiple times. > "[T]here isn't a call to free/delete." All modern OSes will free the memory on program exit. – Max Lybbert Nov 07 '08 at 23:49
  • Argh, formatting screwy in previous comment. – Max Lybbert Nov 07 '08 at 23:51
  • If you're using MFC (I'll assume the OP is since he mentions C and C++) memory leaks are pretty much unavoidable. I personally have tracked several right into MFC and had to just "let them go." In my experience, ATL is better but more difficult to work with. – BoltBait Nov 07 '08 at 23:58
  • @John, it is a balance between cost and quality. Actually it is really straightforward. Do I want to write perfect code? Yes. Can customers afford perfect code? Generally no, or at least it isn't a good choice for them. It is a question of practicality and realism, here mem leaks are acceptable. – vfilby Nov 08 '08 at 05:39
  • hmm I think I just contradicted myself, I should have chosen a different word rather than 'straightforward' in my previous, previous comment to JohnMcG. It should probably read the balance is never simple. – vfilby Nov 08 '08 at 05:41
  • 1
    John, I 100% agree with you. Imbue, the question is almost "how much do you accept?" Sloppy is sloppy. How about I leave a shrimp behind your monitor? Stink is stink. Every time we cave, our industry caves a bit. If you know there's a leak and you know you caused it, then you should fix it. – baash05 Jan 19 '09 at 04:03
37

Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.

This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.

I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.

The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.

What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.

Also see this comment for more details about why freeing memory just before termination is bad.

A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!

So, to convince people, I will demonstrate an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000 byte pages.

Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!

On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.

Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.
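Here is a minimal C sketch of the scenario just described (the counts mirror the example above; real page sizes and allocator behavior vary by platform, so treat it as illustrative):

#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS 10000
#define BLKSIZE 100

int main(void){
    static void *blocks[NBLOCKS];
    int live = NBLOCKS;
    int i;

    for (i = 0; i < NBLOCKS; i++)
        blocks[i] = malloc(BLKSIZE);   /* ~1MB spread across many pages */

    /* Free 9000 of the 10000 blocks at random. */
    while (live > 1000) {
        i = rand() % NBLOCKS;
        if (blocks[i]) {
            free(blocks[i]);
            blocks[i] = NULL;
            live--;
        }
    }

    /* Only ~100KB is still allocated, but nearly every page still
       contains at least one live block, so the allocator usually
       cannot hand the pages back to the OS. Watch the process's
       resident size from outside (e.g. with top) to see this. */
    printf("%d blocks still live\n", live);
    return 0;
}

Typically the process's resident size barely drops after all those frees, which is exactly the point.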

Community
  • 1
  • 1
Artelius
  • 48,337
  • 13
  • 89
  • 105
  • 1
  • What if other programs require the memory which your program is holding up unnecessarily? Hence, even though you might not need any more mallocs, free() the unused memory spaces :) – M.N Feb 27 '09 at 06:11
  • 2
    You've totally missed my point. When you free() memory, other programs cannot use it!! (Sometimes they can, particularly if you free large blocks of memory. But often, they can't!) I will edit my post to make this clearer. – Artelius Mar 23 '09 at 00:17
27

There is nothing conceptually wrong with having the OS clean up after the application is run.

It really depends on the application and how it will be run. Continually occurring leaks in an application that needs to run for weeks have to be taken care of, but a small tool that calculates a result without too high a memory need should not be a problem.

There is a reason why many scripting languages do not garbage collect cyclical references… for their usage patterns, it's not an actual problem and would thus be as much of a waste of resources as the wasted memory.

sharptooth
  • 167,383
  • 100
  • 513
  • 979
kasperjj
  • 3,632
  • 27
  • 25
  • About scripting languages: Python uses refcounting but has a GC just to free cyclical references. In other languages, the programmer often avoids explicitly cyclical references altogether, which creates other problems. – Blaisorblade Jan 12 '09 at 03:39
  • The earlier versions of PHP didn't release memory, they just ran from start to end growing in memory - after the typically 0.1 seconds of execution time, the script would exit, and all memory would be reclaimed. – Arafangion Mar 27 '09 at 00:52
19

I believe the answer is no, never allow a memory leak, and I have a few reasons which I haven't seen explicitly stated. There are great technical answers here but I think the real answer hinges on more social/human reasons.

(First, note that as others mentioned, a true leak is when your program, at any point, loses track of memory resources that it has allocated. In C, this happens when you malloc() to a pointer and let that pointer leave scope without doing a free() first.)

The crux of your decision here is habit. When you code in a language that uses pointers, you're going to use pointers a lot. And pointers are dangerous; they're the easiest way to add all manner of severe problems to your code.

When you're coding, sometimes you're going to be on the ball and sometimes you're going to be tired or mad or worried. During those somewhat distracted times, you're coding more on autopilot. The autopilot effect doesn't differentiate between one-off code and a module in a larger project. During those times, the habits you establish are what will end up in your code base.

So no, never allow memory leaks for the same reason that you should still check your blind spots when changing lanes even if you're the only car on the road at the moment. During times when your active brain is distracted, good habits are all that can save you from disastrous missteps.

Beyond the "habit" issue, pointers are complex and often require a lot of brain power to track mentally. It's best to not "muddy the water" when it comes to your usage of pointers, especially when you're new to programming.

There's a more social aspect too. By proper use of malloc() and free(), anyone who looks at your code will be at ease; you're managing your resources. If you don't, however, they'll immediately suspect a problem.

Maybe you've worked out that the memory leak doesn't hurt anything in this context, but every maintainer of your code will have to work that out in his head too when he reads that piece of code. By using free() you remove the need to even consider the issue.

Finally, programming is writing a mental model of a process to an unambiguous language so that a person and a computer can perfectly understand said process. A vital part of good programming practice is never introducing unnecessary ambiguity.

Smart programming is flexible and generic. Bad programming is ambiguous.

Jason L
  • 2,908
  • 3
  • 21
  • 19
  • I love the habit idea. I also agree. If I see a memory leak, I always wonder what other corners the coder cut. Especially if it's obvious – baash05 Jan 19 '09 at 04:16
  • This is the best answer by far. I have been programming in C++ for 5 years now and I have never written a single memory leak. The reason is that I do not write code that tends to leak memory. Good C++ design has you rarely use `new`, so that eliminates most memory leaks right away. Only if you absolutely must do you use `new`. The result of that `new` must be immediately placed into a smart pointer. If you follow those two rules, you will simply never leak memory (barring a bug in a library). The only case remaining is `shared_ptr` cycles, in which case you have to know to use `weak_ptr`. – David Stone Aug 12 '12 at 23:25
18

I'm going to give the unpopular but practical answer that it's always wrong to free memory unless doing so will reduce the memory usage of your program. For instance, a program that makes a single allocation or series of allocations to load the dataset it will use for its entire lifetime has no need to free anything. In the more common case of a large program with very dynamic memory requirements (think of a web browser), you should obviously free memory you're no longer using as soon as you can (for instance when closing a tab/document/etc.), but there's no reason to free anything when the user clicks "exit", and doing so is actually harmful to the user experience.

Why? Freeing memory requires touching memory. Even if your system's malloc implementation happens not to store metadata adjacent to the allocated memory blocks, you're likely going to be walking recursive structures just to find all the pointers you need to free.

Now, suppose your program has worked with a large volume of data, but hasn't touched most of it for a while (again, web browser is a great example). If the user is running a lot of apps, a good portion of that data has likely been swapped to disk. If you just exit(0) or return from main, it exits instantly. Great user experience. If you go to the trouble of trying to free everything, you may spend 5 seconds or more swapping all the data back in, only to throw it away immediately after that. Waste of user's time. Waste of laptop's battery life. Waste of wear on the hard disk.

This is not just theoretical. Whenever I find myself with too many apps loaded and the disk starts thrashing, I don't even consider clicking "exit". I get to a terminal as fast as I can and type killall -9 ... because I know "exit" will just make it worse.
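A sketch of the idea (the node structure is an illustrative stand-in for a real application's data):

#include <stdlib.h>

struct node { struct node *next; char payload[1000]; };

/* Build a large linked list, standing in for "a large volume of data". */
static struct node *build(size_t n){
    struct node *head = NULL;
    while (n--) {
        struct node *p = malloc(sizeof *p);
        if (!p) break;
        p->next = head;
        head = p;
    }
    return head;
}

int main(void){
    struct node *data = build(1000000);  /* roughly a gigabyte of nodes */

    /* ... long-running work; much of the list may be swapped out by now ... */

    /* Freeing node by node would touch (and swap back in) every page:
       for (struct node *p = data; p; ) {
           struct node *next = p->next;
           free(p);
           p = next;
       }
       Returning instead lets the OS tear down the address space at once. */
    (void)data;
    return 0;
}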

R.. GitHub STOP HELPING ICE
  • 208,859
  • 35
  • 376
  • 711
  • 5
    Love this quote from Raymond Chen: "The building is being demolished. Don't bother sweeping the floor and emptying the trash cans and erasing the whiteboards. And don't line up at the exit to the building so everybody can move their in/out magnet to out. All you're doing is making the demolition team wait for you to finish these pointless housecleaning tasks." (https://blogs.msdn.microsoft.com/oldnewthing/20120105-00/?p=8683) – Andreas Magnusson Jun 08 '16 at 14:20
15

I think in your situation the answer may be that it's okay. But you definitely need to document that the memory leak is a conscious decision. You don't want a maintenance programmer to come along, slap your code inside a function, and call it a million times. So if you make the decision that a leak is okay you need to document it (IN BIG LETTERS) for whoever may have to work on the program in the future.

If this is a third party library you may be trapped. But definitely document that this leak occurs.

But basically if the memory leak is a known quantity like a 512 KB buffer or something, then it is a non-issue. If the memory leak keeps growing - like, every time you make a library call your memory increases by 512 KB and is not freed - then you may have a problem. If you document it and control the number of times the call is executed, it may be manageable. But then you really need documentation because while 512 KB isn't much, 512 KB over a million calls is a lot.
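For instance, this is the kind of loud comment that belongs at the allocation site (a sketch; the buffer name and numbers are illustrative):

/*
 * INTENTIONAL LEAK - DO NOT "FIX" WITHOUT READING:
 * this 512 KB scratch buffer is allocated once at startup and used
 * until process exit; the OS reclaims it on termination. If this code
 * ever moves into a function that is called repeatedly, a matching
 * free() MUST be added, or it becomes a real, growing leak.
 */
static char *scratch_buffer;   /* set once at init: scratch_buffer = malloc(512 * 1024); */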

Also you need to check your operating system documentation. If this is for an embedded device, there may be operating systems that don't free all the memory from a program that exits. I'm not sure, maybe this isn't true. But it is worth looking into.

Cervo
  • 3,112
  • 1
  • 24
  • 27
  • 3
    "But you definitely need to document that the memory leak is a conscious decision." Thank heavens. The best point made so far. – pestophagous Nov 07 '08 at 23:28
11

I'm sure that someone can come up with a reason to say Yes, but it won't be me. Instead of saying no, I'm going to say that this shouldn't be a yes/no question. There are ways to manage or contain memory leaks, and many systems have them.

There are NASA systems on devices that leave the earth that plan for this. The systems will automatically reboot every so often so that memory leaks will not become fatal to the overall operation. Just an example of containment.

pearcewg
  • 9,545
  • 21
  • 79
  • 125
8

If you allocate memory and use it until the last line of your program, that's not a leak. If you allocate memory and forget about it, even if the amount of memory isn't growing, that's a problem. That allocated but unused memory can cause other programs to run slower or not at all.

Bill the Lizard
  • 398,270
  • 210
  • 566
  • 880
  • Not really, since if it's unused, it will just get paged out. When the app exits, all the memory is released. – Eclipse Nov 07 '08 at 19:59
  • As long as it's allocated other programs won't be able to use it. It won't get paged out if you don't deallocate it. – Bill the Lizard Nov 07 '08 at 20:02
  • Of course it will - that's what virtual memory is all about. You can have 1 GB of actual RAM, and yet have 4 processes each fully allocating 2 GB of virtual memory (so long as your page file is big enough). – Eclipse Nov 07 '08 at 20:06
  • Of course, you'll get nasty paging problems if each of those processes are actively using all that memory. – Eclipse Nov 07 '08 at 20:06
  • Okay, I understand what you're talking about now. If you deallocate memory you're not using, you'll reduce the need for paging. If you keep it allocated, your application will still keep it when it's paged back in. – Bill the Lizard Nov 07 '08 at 20:21
8

I can count on one hand the number of "benign" leaks that I've seen over time.

So the answer is a very qualified yes.

An example. If you have a singleton resource that needs a buffer to store a circular queue or deque but doesn't know how big the buffer will need to be and can't afford the overhead of locking for every reader, then allocating an exponentially doubling buffer but not freeing the old ones will leak a bounded amount of memory per queue/deque. The benefit is that these buffers speed up every access dramatically and can change the asymptotics of multiprocessor solutions by never risking contention for a lock.

I've seen this approach used to great benefit for things with very clearly fixed counts such as per-CPU work-stealing deques, and to a much lesser degree in the buffer used to hold the singleton /proc/self/maps state in Hans Boehm's conservative garbage collector for C/C++, which is used to detect the root sets, etc.

While technically a leak, both of these cases are bounded in size, and in the growable circular work-stealing deque case there is a huge performance win in exchange for a bounded factor-of-2 increase in the memory usage for the queues.
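For illustration, a simplified single-writer sketch of the doubling-buffer idiom described above (names and sizes are made up for the example):

#include <stdlib.h>
#include <string.h>

typedef struct {
    void **slots;   /* current array; old arrays are deliberately leaked */
    size_t cap;
} growbuf;

/* Double the capacity. The old array is NOT freed: a concurrent reader
 * may still be dereferencing it, and skipping the free is what lets
 * readers run without taking a lock. The total leak is bounded, since
 * the sizes of all previous arrays sum to less than the current one. */
static void grow(growbuf *b){
    size_t ncap = b->cap ? b->cap * 2 : 16;
    void **fresh = malloc(ncap * sizeof *fresh);
    if (!fresh) abort();                     /* sketch: no recovery path */
    if (b->slots) memcpy(fresh, b->slots, b->cap * sizeof *fresh);
    b->slots = fresh;   /* publish the new array; the old one leaks */
    b->cap = ncap;
}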

Edward Kmett
  • 29,632
  • 7
  • 85
  • 107
8

If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.

In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case with POSIX. On other OSes – particularly embedded ones – YMMV).

The only caution I'd have with not freeing the memory at exit time is that if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a memory leak.
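A tiny illustration of that caution (run_with is a hypothetical stand-in for real work):

#include <stdlib.h>

/* Hypothetical stand-in for the program's actual work. */
static void run_with(void *cfg){ (void)cfg; }

/* Fine as a one-shot program: the allocation lives until exit,
   and the OS reclaims it when main returns. */
int main(void){
    void *cfg = malloc(4096);
    run_with(cfg);
    return 0;
}

/* But refactor the same pattern into a per-request service handler
   and it now leaks 4 KB on every call:

   void handle_request(void){
       void *cfg = malloc(4096);   // never freed: a real, growing leak
       run_with(cfg);
   }
*/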

sharptooth
  • 167,383
  • 100
  • 513
  • 979
nsayer
  • 16,925
  • 3
  • 33
  • 51
  • I beg to differ. That *is* “a memory leak per se”. – Konrad Rudolph Nov 07 '08 at 22:15
  • It's not a leak until you "lose" the reference to the object. Presumably, if the memory is used for the lifetime of the program, then it's not leaked. If the reference is not lost until exit() is called, then it is absolutely *not* a leak. – nsayer Nov 07 '08 at 23:12
  • Amiga DOS was the last O/S I looked at that didn't clean up behind processes. Be aware, though, that System V IPC shared memory can be left around even if no process is using it. – Jonathan Leffler Nov 08 '08 at 19:40
  • Palm doesn't free memory "leaked" until you hotsync. It came well after the Amiga. I've run apps on my palm emulator that had leaks. Never did they make their way to my actual palm. – baash05 Jan 19 '09 at 04:08
6

In this sort of question context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them if they crop up, but it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee for me to fix a leak in their code. Let me give you an example:

I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and grabbing it at some point where I knew it was no longer in use, to free it), but that would have just caused more confusion to the next guy who had to touch the code.

Personally I would not have written the code that way in the first place, but most of us don't get to always work on pristine well designed codebases, and sometimes you have to look at these things pragmatically. The amount of time it would have taken me to fix that 150 byte leak could instead be spent making algorithmic improvements that shaved off megabytes of ram.

Ultimately, I decided that leaking 150 bytes for an app that used around a gig of RAM and ran on a dedicated machine was not worth fixing, so I wrote a comment saying that it was leaked, what needed to be changed in order to fix it, and why it was not worth it at the time.

Louis Gerbarg
  • 43,356
  • 8
  • 80
  • 90
  • Smart. Especially since the leak was during initialization, which means that it would not accumulate over the runtime of the application. – Demi Nov 16 '13 at 02:53
5

this is so domain-specific that it's hardly worth answering. use your freaking head.

  • space shuttle operating system: nope, no memory leaks allowed
  • rapid development proof-of-concept code: fixing all those memory leaks is a waste of time.

and there is a spectrum of intermediate situations.

the opportunity cost ($$$) of delaying a product release to fix all but the worst memory leaks usually dwarfs any feelings of being "sloppy or unprofessional". Your boss pays you to make him money, not to get a warm, fuzzy feeling.

Dustin Getz
  • 21,282
  • 15
  • 82
  • 131
  • 2
    Very short-sighted attitude. You're basically saying that there is no need to use fundamentally sound programming practices until a defect is found to be caused by those practices. Problem is that software that is written using sloppy methods tends to have more defects than software that isn't. – John Dibling Nov 07 '08 at 19:23
  • 1
    I don't believe that at all. And memory management is more complicated than writing clean methods. – Dustin Getz Nov 07 '08 at 20:40
  • 1
    Dustin obviously works in the real world like most of us, where we perpetually work against insane deadlines to keep up with the competition. So dealing with bugs should be done in a pragmatic way. By wasting too much time on unimportant bugs in unimportant programs, you won't get your stuff done. – Wouter van Nifterick Jan 07 '09 at 08:08
  • 1
    The problem with this attitude is: when do you start fixing the leaks? *"OK, it's a powerplant, but it's just coal, not Uranium. Why fix leaks here?"* - I learnt in the real world that if you don't do the right thing from the very beginning, all the time, it just never happens. That attitude breeds projects that are "99% complete" after two weeks and remain so for two months. – peterchen Dec 21 '09 at 08:21
  • So are you planning to fix the issues when you have a bunch of them, one built upon the other? – Alberto Salvia Novella Dec 16 '20 at 01:51
5

You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Very frequently, analysis tools will report many red herrings, and label something as having been leaked (memory or resources such as handles etc.) where it actually isn't. Often this is due to the analysis tool's architecture. For example, certain analysis tools will report run-time objects as memory leaks because they never see those objects freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.

With that said, there will still be times when you will have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes is it ever OK to leave them in the code?

The ideal answer is, "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited amount of resources and time to resolve an endless list of tasks. When one of the tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases it might even be impossible to eliminate certain leaks because of the application's architecture without a major refactoring of code. You have to weigh the costs and benefits of eliminating the remaining 2%.

John Dibling
  • 99,718
  • 31
  • 186
  • 324
5

While most answers concentrate on real memory leaks (which are not OK ever, because they are a sign of sloppy coding), this part of the question appears more interesting to me:

What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?

If the associated memory is used, you cannot free it before the program ends. Whether the free is done by the program at exit or by the OS does not matter, as long as this is documented (so that future changes don't introduce real memory leaks), and as long as there is no C++ destructor or C cleanup function involved in the picture. A not-closed file might be revealed through a leaked FILE object, but a missing fclose() might also cause the buffer not to be flushed.

So, back to the original case, it is IMHO perfectly OK in itself - so much so that Valgrind, one of the most powerful leak detectors, will report such still-reachable blocks only if requested (e.g. with --show-reachable=yes). What Valgrind does flag is overwriting a pointer without freeing the block it pointed to: that gets counted as a memory leak, because it is more likely to happen again and to cause the heap to grow endlessly.

Then there are non-freed memory blocks which are still reachable. One could make sure to free all of them at exit, but that is just a waste of time in itself. The point is whether they could have been freed earlier. Lowering memory consumption is useful in any case.

Blaisorblade
  • 6,438
  • 1
  • 43
  • 76
4

I agree with vfilby – it depends. In Windows, we treat memory leaks as relatively serious bugs. But it very much depends on the component.

For example, memory leaks are not very serious for components that run rarely, and for limited periods of time. These components run, do their work, then exit. When they exit, all their memory is freed implicitly.

However, memory leaks in services or other long-running components (like the shell) are very serious. The reason is that these bugs 'steal' memory over time. The only way to recover this is to restart the components. Most people don't know how to restart a service or the shell – so if their system performance suffers, they just reboot.

So, if you have a leak – evaluate its impact in several ways:

  1. To your software and your user's experience.
  2. To the system (and the user) in terms of being frugal with system resources.
  3. Impact of the fix on maintenance and reliability.
  4. Likelihood of causing a regression somewhere else.

Foredecker
  • 7,395
  • 4
  • 29
  • 30
4

Even if you are sure that your 'known' memory leak will not cause havoc, don't do it. At best, it will pave the way for you to make a similar and probably more critical mistake at a different time and place.

For me, asking this is like questioning "Can I run the red light at 3 AM when no one is around?" Well sure, it may not cause any trouble at that time, but it will provide a lever for you to do the same in rush hour!

Ather
  • 1,600
  • 11
  • 17
4

No, you should not have leaks that the OS will clean up for you. The reason (not mentioned in the answers above as far as I could check) is that you never know when your main() will be re-used as a function/module in another program. If your main() gets to be a frequently-called function in another person's software, this software will have a memory leak that eats memory over time.

KIV

4

I'm surprised to see so many incorrect definitions of what a memory leak actually is. Without a concrete definition, a discussion on whether it's a bad thing or not will go nowhere.

As some commenters have rightly pointed out, a memory leak only happens when memory allocated by a process goes out of scope to the extent that the process is no longer able to reference or delete it.

A process which is grabbing more and more memory is not necessarily leaking. So long as it is able to reference and deallocate that memory, then it remains under the explicit control of the process and has not leaked. The process may well be badly designed, especially in the context of a system where memory is limited, but this is not the same as a leak. Conversely, losing scope of, say, a 32 byte buffer is still a leak, even though the amount of memory leaked is small. If you think this is insignificant, wait until someone wraps an algorithm around your library call and calls it 10,000 times.

I see no reason whatsoever to allow leaks in your own code, however small. Modern programming languages such as C and C++ go to great lengths to help programmers prevent such leaks and there is rarely a good argument not to adopt good programming techniques - especially when coupled with specific language facilities - to prevent leaks.
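In plain C, for instance, the single-exit/goto-cleanup idiom is one such technique, keeping every allocation paired with its free even on error paths (a minimal sketch):

#include <stdlib.h>

int process(void){
    int rc = -1;             /* assume failure until proven otherwise */
    char *a = malloc(64);
    char *b = NULL;
    if (!a) goto out;
    b = malloc(256);
    if (!b) goto out;

    /* ... work with a and b ... */
    rc = 0;

out:
    free(b);                 /* free(NULL) is a well-defined no-op */
    free(a);
    return rc;
}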

As regards existing or third party code, where your control over quality or ability to make a change may be highly limited, depending on the severity of the leak, you may be forced to accept or take mitigating action such as restarting your process regularly to reduce the effect of the leak.

It may not be possible to change or replace the existing (leaking) code, and therefore you may be bound to accept it. However, this is not the same as declaring that it's OK.

Component 10
  • 10,247
  • 7
  • 47
  • 64
3

I guess it's fine if you're writing a program meant to leak memory (i.e. to test the impact of memory leaks on system performance).

Beep beep
  • 18,873
  • 12
  • 63
  • 78
2

I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.

If I understood correctly, you don't explicitly free memory (which could be freed, because you still have a pointer) and rely on the OS to free it during process termination. Though this may seem okay for a simple program, consider the situation where your code is moved into a library and becomes a part of some resident daemon process running 24/7. Say this daemon spawns a thread each time it needs to do something useful using your code, and say it spawns thousands of threads every hour. In this case you will get a real memory leak.

Unfortunately, this situation is not unlikely in real life, and consistent memory management techniques may make your life easier.

Manu343726
  • 13,969
  • 4
  • 40
  • 75
Dmitry Krivenok
  • 326
  • 3
  • 10
2

It's really not a leak if it's intentional, and it's not a problem unless it's a significant amount of memory, or could grow to be a significant amount of memory. It's fairly common not to clean up global allocations during the lifetime of a program. If the leak is in a server or long-running app and grows over time, then it's a problem.

Sanjaya R
  • 6,246
  • 2
  • 17
  • 19
2

I think you've answered your own question. The biggest drawback is how they interfere with the memory leak detection tools, but I think that drawback is a HUGE drawback for certain types of applications.

I work with legacy server applications that are supposed to be rock solid but they have leaks and the globals DO get in the way of the memory detection tools. It's a big deal.

In the book "Collapse" by Jared Diamond, the author wonders about what the guy was thinking who cut down the last tree on Easter Island, the tree he would have needed in order to build a canoe to get off the island. I wonder about the day many years ago when that first global was added to our codebase. THAT was the day it should have been caught.

Corey Trager
  • 22,649
  • 18
  • 83
  • 121
2

I see the same problem as with all scenario questions like this: what happens when the program changes, and suddenly that little memory leak is being triggered ten million times, and the end of your program is in a different place, so it does matter? If it's in a library, then log a bug with the library maintainers; don't put a leak into your own code.

tloach
  • 8,009
  • 1
  • 33
  • 44
  • In that case the impact of the memory leak changes, and you need to re-evaluate the priority of plugging the leak. – John Dibling Nov 07 '08 at 19:18
  • @John: You better at least document the leak then. Even then, I wouldn't trust someone to not ignore a big red flashing comment and copy-and-paste leaky code anyway. I prefer not to give someone the ability to do that in the first place. – tloach Nov 07 '08 at 19:44
2

I'll answer no.

In theory, the operating system will clean up after you if you leave a mess (now that's just rude, but since computers don't have feelings it might be acceptable). But you can't anticipate every possible situation that might occur when your program is run. Therefore (unless you are able to conduct a formal proof of some behaviour), creating memory leaks is just irresponsible and sloppy from a professional point of view.

If a third-party component leaks memory, this is a very strong argument against using it, not only because of the imminent effect but also because it shows that the programmers work sloppily and that this might also impact other metrics. Now, when considering legacy systems this is difficult (consider web browsing components: to my knowledge, they all leak memory) but it should be the norm.

Konrad Rudolph
  • 530,221
  • 131
  • 937
  • 1,214
2

Historically, it did matter on some operating systems under some edge cases. These edge cases could exist in the future.

Here's an example: on SunOS in the Sun 3 era, there was an issue when a process used exec (or more traditionally fork and then exec): the subsequent new process would inherit the same memory footprint as the parent, and it could not be shrunk. If a parent process allocated 1/2 gig of memory and didn't free it before calling exec, the child process would start using that same 1/2 gig (even though it wasn't allocated). This behavior was best exhibited by SunTools (their default windowing system), which was a memory hog. Every app that it spawned was created via fork/exec and inherited SunTools' footprint, quickly filling up swap space.

plinth
  • 48,267
  • 11
  • 78
  • 120
2

This was already discussed ad nauseam. Bottom line is that a memory leak is a bug and must be fixed. If a third party library leaks memory, it makes one wonder what else is wrong with it, no? If you were building a car, would you use an engine that is occasionally leaking oil? After all, somebody else made the engine, so it's not your fault and you can't fix it, right?

Community
  • 1
  • 1
Dima
  • 38,860
  • 14
  • 75
  • 115
  • But if you owned a car with an engine that occasionally leaks oil, do you spend money to fix it, or do you keep an eye on the oil levels and top it up from time to time. The answer depends on all kinds of factors. – slim Nov 27 '08 at 16:33
  • This is not about owning a car. This is about building a car. If you get a third-party library with memory leaks and you absolutely have to use it, then you live with it. But if you are the one writing a system or a library, it is your responsibility to make sure it is bug-free. – Dima Nov 30 '08 at 03:47
  • +1 treat it like any other bug. (That doesn't mean "fix instantly" in my book, but "needs to befixed" for sure) – peterchen Dec 21 '09 at 08:30
2

Generally a memory leak in a stand alone application is not fatal, as it gets cleaned up when the program exits.

What do you do for Server programs that are designed so they don't exit?

If you are the kind of programmer that does not design and implement code where the resources are allocated and released correctly, then I don't want anything to do with you or your code. If you don't care to clean up your leaked memory, what about your locks? Do you leave them hanging out there too? Do you leave little turds of temporary files lying around in various directories?

Leak that memory and let the program clean it up? No. Absolutely not. It's a bad habit, that leads to bugs, bugs, and more bugs.

Clean up after yourself. Yo momma don't work here no more.

EvilTeach
  • 28,120
  • 21
  • 85
  • 141
  • I have worked on server programs that deliberately use processes rather than threads, so that memory leaks and segmentation faults cause limited damage. – slim Nov 27 '08 at 16:35
  • Interesting approach. I would be a bit concerned about processes that fail to exit and continue to gobble up memory. – EvilTeach Dec 04 '08 at 21:02
2

As a general rule, if you've got memory leaks that you feel you can't avoid, then you need to think harder about object ownership.

But to your question, my answer in a nutshell is: in production code, yes; during development, no. This might seem backwards, but here's my reasoning:

In the situation you describe, where the memory is held until the end of the program, it's perfectly okay to not release it. Once your process exits, the OS will clean up anyway. In fact, it might make the user's experience better: In a game I've worked on, the programmers thought it would be cleaner to free all the memory before exiting, causing the shutdown of the program to take up to half a minute! A quick change that just called exit() instead made the process disappear immediately, and put the user back to the desktop where he wanted to be.

However, you're right about the debugging tools: they'll throw a fit, and all the false positives might make finding your real memory leaks a pain. Because of that, always write debugging code that frees the memory, and disable it when you ship.
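One way to implement that split, sketched here under the assumption that the build system defines NDEBUG for release builds (as most do), with a hypothetical allocation registry:

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical registry of long-lived allocations (illustration only).
std::vector<int*>& registry() {
    static std::vector<int*> r;
    return r;
}

int main() {
    registry().push_back(new int[1000000]);  // lives until program exit

#ifndef NDEBUG
    // Debug builds: free everything so the leak checker reports a clean run.
    for (int* p : registry()) delete[] p;
    registry().clear();
#endif
    // Release builds (NDEBUG defined): skip the teardown; the OS reclaims it.
    return EXIT_SUCCESS;
}
```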

Enno
  • 1,736
  • 17
  • 32
2

Yes, a memory leak can be the lesser of two evils. While correctness is important, performing a full memory release can hurt the performance or even the stability of the system, and the risk and time spent freeing memory and destroying objects may be less desirable than simply exiting the process.

In general, it is not usually acceptable to leave memory around. It is difficult to anticipate all of the contexts in which your code will run, and in some cases the leak can become catastrophic.

What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)?

In this case, your code may later be ported into a larger project. That may mean the lifetime of your object is too long (it lasts for the whole of the program, not just the part where it is needed), or that, if the global is created and destroyed repeatedly, it leaks each time.

is it OK to trust the OS to free your memory for you when your application terminates

When a short-lived program creates large C++ collections (e.g. std::map), there are at least 2 allocations per object. Iterating through such a collection to destroy the objects takes real CPU time, so leaving the objects to leak and be tidied up by the OS has performance advantages. The counter-argument is that some resources are not tidied up by the OS (e.g. shared memory), and not destroying all the objects in your code opens the risk that some of them held onto these non-freed resources.
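To make that cost concrete, here is a small sketch (a hypothetical heap-allocated container, not code from the answer) contrasting a full destructor walk with abandoning the structure at exit:

```cpp
#include <cstdlib>
#include <map>
#include <string>

int main() {
    // Every node in the map is its own allocation; each string may add more.
    auto* big = new std::map<int, std::string>;
    for (int i = 0; i < 1000000; ++i)
        (*big)[i] = "value";

    // Option A: delete big;   // walks and frees ~2 million blocks before exiting
    // Option B: abandon it and let the OS reclaim the address space in one sweep:
    std::_Exit(EXIT_SUCCESS);  // skips destructors and atexit handlers entirely
}
```

Option B trades a clean leak report for a near-instant exit; as the paragraph above notes, it is only safe for resources the OS actually reclaims.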

What if a third party library forced this situation on you?

Firstly, I would raise a bug asking for a close function that frees the resources. Whether it is acceptable comes down to whether the advantages the library offers (cost, performance, reliability) outweigh doing it with some other library, or writing it yourself.

In general, unless the library may be re-initialized, I would probably not be concerned.

Acceptable times to have a reported memory leak:

  1. A service during shutdown. Here there is a trade-off between time performance and correctness.
  2. A broken object which can't be destroyed. I have been able to detect a failed object (e.g. due to an exception being caught), where trying to destroy the object results in a hang (a held lock).
  3. The memory checker mis-reports a leak.

A service during shutdown

If the operating system is about to be turned off, all resources will be tidied up. The advantage of not performing a normal process shutdown is that the user gets snappier behaviour when turning the machine off.

A broken object

In the past, we found an object (and raised a defect with the team that owned it) that, if it crashed at certain points, became broken in such a way that every subsequent call into it would hang.

Whilst it is poor practice to ignore the memory leak, it was more productive to shut down our process, leaking the object and its memory, than to end up in a hang.

A leak checker mis-reporting

Some leak checkers work by instrumenting objects and behaving in the same way as globals. They can sometimes miss that another global object has a valid destructor which runs after they finish and which would have released the memory.

mksteve
  • 12,614
  • 3
  • 28
  • 50
  • 1
    What appears to be neglected in the answers here is that initialization is hard _and so is clean-up_. Certainly it may be warranted as future-proofing, but there is a cost. An in-progress shutdown creates new intermediate states requiring careful handling, otherwise inducing races and other bugs. Consider deinitializing an application split into a UI thread and a worker thread, where each end needs handling for the other end no longer being there. In my experience of bare-metal embedded programming, no-one bothers to shut down peripherals and release memory during power-off except as required for correctness. – doynax Aug 05 '17 at 11:35
  • In other words, deciding not to clean up after yourself where it is deemed unnecessary may not be a sign of laziness so much as a considered engineering trade-off. – doynax Aug 05 '17 at 11:38
2

Some great answers here. To add another perspective to this question, I'll address a case where a memory leak is not only acceptable but desirable: in the Windows driver environment, the developer provides a set of callbacks that the OS runs whenever required. One of the callbacks is a 'Shutdown' callback, which runs prior to the system being shut off or restarted. Unlike standard situations, not only is releasing memory unnecessary (the system will be off in a moment), it's even discouraged, to make the shutdown as fast as possible and avoid the overhead of memory management.
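For illustration, a rough WDM-flavoured sketch of such a shutdown callback. This assumes the Windows Driver Kit headers and a driver project (it is not a standalone program); the dispatch-table registration is the standard WDM pattern, but the body is only a sketch:

```cpp
// The shutdown dispatch completes the IRP and deliberately frees nothing,
// because the machine is powering off and speed matters more than tidiness.
NTSTATUS DispatchShutdown(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    // Flush whatever the hardware genuinely needs here, but skip the
    // ExFreePool calls you would normally make on unload.
    Irp->IoStatus.Status = STATUS_SUCCESS;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}

// Registered once at initialization, e.g. in DriverEntry:
//   DriverObject->MajorFunction[IRP_MJ_SHUTDOWN] = DispatchShutdown;
//   IoRegisterShutdownNotification(DeviceObject);
```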

SomeWittyUsername
  • 18,025
  • 3
  • 42
  • 85
1

I totally agree with JohnMcG, and just want to add that I have myself had trouble discovering real, potentially serious memory leaks in time, simply because the benign ones had been accepted. When these grow to be many over time, it becomes more and more difficult to detect the serious ones in the flood of benign ones.

So, at least for your fellow programmers' sake (and also for your own in the future), please try to eliminate them as soon as possible.

1

It looks like your definition of "memory leak" is "memory that I don't clean up myself." All modern OSes will free it on program exit. However, since this is a C++ question, you can simply wrap the memory in question in an appropriate std::auto_ptr, which will call delete when it goes out of scope.
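A sketch of that wrapping. Note that std::auto_ptr has since been deprecated (C++11) and removed (C++17), so std::unique_ptr is the modern spelling of the same idea:

```cpp
#include <memory>

struct Config { /* settings for the whole run */ };

int main() {
    // Owns the allocation for the program's lifetime; delete happens
    // automatically when main returns, so leak checkers stay quiet.
    std::unique_ptr<Config> config(new Config);
    // ... use *config until the very last line ...
}   // ~unique_ptr() calls delete here
```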

Max Lybbert
  • 19,717
  • 4
  • 46
  • 69
1

I took one class in high school on C, and the teacher said to always make sure to free what you malloc.

But when I took another course in college, the professor said it was OK not to free for small programs that only run for a second. So I suppose it doesn't hurt your program, but it is good practice to free for strong, healthy code.

azn_person
  • 183
  • 1
  • 1
  • 6
1

It really depends on how the object that creates the leak is used. If you create the object many times over the lifetime of the application, then leaking it is bad, because the leaked memory accumulates. On the other hand, if there is a single instance of the object and it leaks only a small, fixed amount, then it is not a problem.

A memory leak is a problem when the leak keeps growing while the application is running.
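A hypothetical contrast may make the distinction clearer; the first leak is bounded, while the second grows with use:

```cpp
#include <string>

std::string* global_config = nullptr;

void init() {
    // Bounded: one allocation for the whole run, never freed. Ugly but stable.
    global_config = new std::string("settings");
}

void handle_request() {
    // Unbounded: leaks on every call, so memory grows while the app runs.
    std::string* scratch = new std::string("per-request data");
    (void)scratch;  // never deleted: this is the leak that actually hurts
}

int main() {
    init();
    for (int i = 0; i < 100000; ++i)
        handle_request();   // watch the process's memory climb
}
```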

Vinay
  • 4,743
  • 7
  • 33
  • 43
1

When an application shuts down, it can be argued that it is best to not free memory.

In theory, the OS should release the resources used by the application, but there are always some resources that are exceptions to this rule. So beware.

The good with just exiting the application:

  1. The OS gets one big chunk to free instead of many, many small chunks. This means shutdown is much, much faster, especially on Windows with its slow memory management.

The bad with just exiting is actually two points:

  1. It is easy to forget to release resources that the OS does not track, or that the OS releases only after a delay. One example is TCP sockets.
  2. Memory tracking software will report everything not freed at exit as leaks.

Because of this, you might want to have two modes of shutdown, one quick and dirty for end users and one slow and thorough for developers. Just make sure to test both :)
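A minimal sketch of such a dual-mode shutdown, assuming a hypothetical THOROUGH_SHUTDOWN environment variable selects the developer path:

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical registry of everything the program allocated.
std::vector<char*>& allocations() {
    static std::vector<char*> a;
    return a;
}

int main() {
    allocations().push_back(new char[1 << 20]);

    if (std::getenv("THOROUGH_SHUTDOWN") != nullptr) {
        // Developer mode: free everything so leak trackers report a clean exit.
        for (char* p : allocations()) delete[] p;
        allocations().clear();
        return EXIT_SUCCESS;
    }
    // End-user mode: quick and dirty; the OS reclaims the heap in one sweep.
    std::_Exit(EXIT_SUCCESS);
}
```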

Jørn Jensen
  • 998
  • 1
  • 10
  • 17
0

Splitting hairs perhaps: what if your app is running on UNIX and can become a zombie? In this case the memory does not get reclaimed by the OS. So I say you really should de-allocate the memory before the program exits.

Eric M
  • 1,027
  • 2
  • 8
  • 21
0

It's perfectly acceptable to omit freeing memory on the last line of the program, since freeing it would have no effect on anything: the program never needs the memory again.

henle
  • 473
  • 1
  • 4
  • 16
0

I believe it is okay if you have a program that will run for a matter of seconds and then quit, and it is just for personal use. Any memory leaks will be cleaned up as soon as your program ends.

The problem comes when you have a program that runs for a long time and users rely on it. Also, it is a bad coding habit to let memory leaks exist in your program, especially at work, where that code may be turned into something else someday.

All in all, it's better to remove memory leaks.

PJT
  • 3,439
  • 5
  • 29
  • 40
0

As long as your memory utilization doesn't increase over time, it depends. If you're doing lots of complex synchronization in server software, say starting background threads that block on system calls, doing clean shutdown may be too complex to justify. In this situation the alternatives may be:

  1. Your library doesn't clean up its memory until the process exits.
  2. You write an extra 500 lines of code and add another mutex and condition variable to your class so that it can shut down cleanly from your tests (see the sketch below) – but this code is never used in production, where the server only terminates by crashing.
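For a sense of what point 2 involves, here is a heavily compressed sketch (hypothetical names, far shorter than the real thing): a stop flag plus a condition variable so a blocked worker can be woken and joined:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

class Worker {
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread t_;

    void run() {
        std::unique_lock<std::mutex> lock(m_);
        while (!stop_)
            cv_.wait(lock);   // real code would also wait for work items here
    }

public:
    Worker() : t_(&Worker::run, this) {}

    ~Worker() {               // the "clean shutdown" code the tests need
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        cv_.notify_all();
        t_.join();
    }
};

int main() { Worker w; }      // destructor shuts the thread down cleanly
```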
sharptooth
  • 167,383
  • 100
  • 513
  • 979
Jonathan
  • 2,132
  • 1
  • 11
  • 8
0

Some time ago I would have said yes, that it was sometimes acceptable to leave memory leaks in your program (at least during rapid prototyping), but having now had the experience 5 or 6 times that tracking down even the smallest leak revealed some really severe functional errors, I've changed my mind. A leak happens when the life cycle of a data entity is not really known, which betrays a crass lack of analysis. So, in conclusion, it is always a good idea to know what happens in a program.

Patrick Schlüter
  • 11,394
  • 1
  • 43
  • 48
0

Think of the case where your application is later used by another one, with the possibility of opening several instances in separate windows or one after another. If it is run not as a process but as a library, then the calling program leaks memory because you thought you could skip the memory cleanup.

Use some sort of smart pointer that does the cleanup for you automatically (e.g. scoped_ptr from the Boost libraries).
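For example (a sketch; boost::scoped_ptr is non-copyable and deletes its pointee at scope exit):

```cpp
#include <boost/scoped_ptr.hpp>

struct Resource { /* ... */ };

void library_entry_point() {
    boost::scoped_ptr<Resource> res(new Resource);
    // ... use *res ...
}   // deleted here, however many times the caller invokes us
```

std::unique_ptr gives the same guarantee without the Boost dependency.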

Marius K
  • 498
  • 5
  • 6
0

Only in one instance: The program is going to shoot itself due to an unrecoverable error.

Steve Lacey
  • 813
  • 8
  • 11
0

The best practice is to always free what you allocate, especially if writing something that is designed to run during the entire uptime of a system, even when cleaning up prior to exiting.

It's a very simple rule... programming with the intention of having no leaks makes new leaks easy to spot. Would you sell someone a car that you made, knowing that it sputtered gas on the ground every time it was turned off? :)

A few if () free() calls in a cleanup function are cheap; why not use them?
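In that spirit, a small sketch of such a cleanup function (hypothetical buffer names); the null checks make it safe to call no matter how far initialization got:

```cpp
#include <cstdlib>

static char* io_buffer  = NULL;
static char* log_buffer = NULL;

static void cleanup(void) {
    /* Safe to call no matter how far initialization got. */
    if (io_buffer)  { free(io_buffer);  io_buffer  = NULL; }
    if (log_buffer) { free(log_buffer); log_buffer = NULL; }
}

int main(void) {
    io_buffer  = (char*)malloc(4096);
    log_buffer = (char*)malloc(8192);
    /* ... run ... */
    cleanup();   /* cheap, and the leak report stays empty */
    return 0;
}
```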

Tim Post
  • 33,371
  • 15
  • 110
  • 174
0

The rule is simple: if you have finished using some memory, clean it up. And sometimes, even if we will need some instances later, when we notice that we are using memory heavily (which can hurt performance by forcing swapping to disk), we can store the data in files on disk and reload it afterwards; sometimes this technique optimizes your program a lot.
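A toy sketch of that spill-to-disk idea (hypothetical file name, error handling omitted): write the data out, release the memory, and reload it only when needed:

```cpp
#include <fstream>
#include <vector>

int main() {
    std::vector<double> data(10000000, 3.14);   // roughly 80 MB resident

    {   // Spill to disk while the data is not needed...
        std::ofstream out("spill.bin", std::ios::binary);
        out.write(reinterpret_cast<const char*>(data.data()),
                  data.size() * sizeof(double));
    }
    data.clear();
    data.shrink_to_fit();         // ...and actually give the memory back

    // ...later, reload it on demand.
    std::ifstream in("spill.bin", std::ios::binary);
    data.resize(10000000);
    in.read(reinterpret_cast<char*>(data.data()),
            data.size() * sizeof(double));
}
```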

mike
  • 81
  • 1
  • 3
0

If you are using it up until the tail of your main(), it is simply not a leak (assuming a protected memory system, of course!).

In fact, freeing objects at process shutdown is the absolute worst thing you could do... the OS has to page back in every page you have ever created. Close file handles, database connections, sure, but freeing memory is just dumb.

Simon Buchan
  • 12,707
  • 2
  • 48
  • 55
0

If your code has any memory leaks, even known "acceptable" leaks, then you will have an annoying time using any memory leak tools to find your "real" leaks. Just like leaving "acceptable" compiler warnings makes finding new, "real" warnings more difficult.

Chris Peterson
  • 2,377
  • 1
  • 21
  • 24
0

No, they are not O.K., but I've implemented a few allocators, memory dumpers, and leak detectors, and have found that as a pragmatic matter it's convenient to allow one to mark such an allocation as "Not a Leak as far as the Leak Report is concerned"...

This helps make the leak report more useful... and not crowded with "dynamic allocation at static scope not free'd by program exit"
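A sketch of what such a marker might look like (entirely hypothetical code, not the author's allocator; real tools expose similar hooks, e.g. LeakSanitizer's __lsan_ignore_object): the reporter simply skips registered addresses:

```cpp
#include <cstddef>
#include <cstdio>
#include <set>

static std::set<const void*> g_intentional;   // hypothetical "not a leak" registry

void mark_not_a_leak(const void* p) { g_intentional.insert(p); }

// Called by the (equally hypothetical) leak reporter for each unfreed block:
void report_leak(const void* p, std::size_t bytes) {
    if (g_intentional.count(p))
        return;                                // suppressed: known-benign allocation
    std::fprintf(stderr, "LEAK: %zu bytes at %p\n", bytes, p);
}

int main() {
    void* forever = new char[256];             // static-lifetime block, never freed
    mark_not_a_leak(forever);
    report_leak(forever, 256);                 // prints nothing
    report_leak(new char[32], 32);             // prints a real leak
}
```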

reechard
  • 861
  • 6
  • 22