1

I have found the following references to the C and C++ standards on Stack Overflow (Memory Allocation/Deallocation?), in relation to memory deallocation:

C++ Language:

"If the argument given to a deallocation function in the standard library is a pointer that is not the null pointer value (4.10), the deallocation function shall deallocate the storage referenced by the pointer, rendering invalid all pointers referring to any part of the deallocated storage". [Bold is mine].

C Language:

The free function causes the space pointed to by ptr to be deallocated, that is, **made available for further allocation**. [Bold is mine].

So, let's suppose a scenario like the following one:

You have a linked list in a demo app. After creating and linking your nodes, searching, sorting, and so forth, your app finishes successfully, with a beautiful "return 0".

What is the problem if you have not deallocated any node, given that all the pointers you created have already been destroyed?

Please, I want to clearly distinguish between:

  • what is really needed ("If you do not deallocate, you have a memory leak because of...");

  • what is a good practice, but not strictly required.

Finally: I have intentionally avoided mentioning smart pointers. Because, if your answer is "deallocating is good practice (= not strictly required, no memory leak), because in a real-life scenario you will need to deallocate, etc.", then I can conclude:

  • If I am developing a demo app, I do not even need to use a smart pointer (if I am in C++).

  • If I am in C, I do not need to deallocate, because when the app reaches its end, every pointer will be destroyed.

Exception: if my linked list has a function to delete nodes, then I understand I need to deallocate, because otherwise there is a memory leak.

Any advice, correction, clarification, distinction from your side will be very much appreciated!
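For concreteness, here is a minimal sketch of the scenario (Node, build_list and length are hypothetical names, not from any real app):

```cpp
#include <cstddef>

// Hypothetical minimal node type for the demo-app scenario described above.
struct Node {
    int value;
    Node* next;
};

// Build a small list that is never freed -- the situation in question.
Node* build_list(int n) {
    Node* head = nullptr;
    for (int i = 0; i < n; ++i)
        head = new Node{i, head};
    return head;
}

std::size_t length(const Node* head) {
    std::size_t len = 0;
    for (; head; head = head->next)
        ++len;
    return len;
}

// If main() just returns 0 after using the list, the nodes are never
// deleted; a modern OS reclaims the pages anyway, but a tool like
// valgrind will report them as "definitely lost".
```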


Edit: Thanks to all for your quick answers. Especially @Pablo Esteban Camacho.

Karol Baum
  • 63
  • 7
  • Possible duplicate of [What is a memory leak?](https://stackoverflow.com/questions/3373854/what-is-a-memory-leak) –  Aug 07 '17 at 05:30
  • 1
    You *have* to deallocate your memory. Otherwise the memory consumption of your program will continually increase until your system is out of memory. This is not only bad practice but even exposes your program as a potential vulnerability for an attacker. – Henri Menke Aug 07 '17 at 05:30
  • 1
    @HenriMenke not necessarily true. If the allocation is made at startup and never repeated, the memory consumption will not continually increase. I think I see now that this question is too broad. There are too many possible scenarios and too many considerations for a comprehensive answer:( – Martin James Aug 07 '17 at 05:52
  • @HenriMenke: "Otherwise the memory consumption of your program will continually increase until your system is out of memory". Could you please clarify this assessment? How would a program which has ended be able to "continually increase" its memory consumption? – Karol Baum Aug 07 '17 at 05:58
  • @MartinJames: I wrote: "your app is finishing successfully, with a beautiful "return 0" ". So, **"Do I have a memory leak because of not deallocating?"**. Where do you see the question being broad? – Karol Baum Aug 07 '17 at 06:04
  • 1
    @KarolBaum You cannot say whether your program will ever terminate. See the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem). Also, saying your program will terminate eventually and let the OS clean up your mess is a very poor excuse. First, you would need to make sure that you can *never* run out of memory before your program terminates and second, you are degrading performance because the OS will have to walk the whole memory to detect where those pointers you didn't clean up actually point (this is called Garbage Collection). – Henri Menke Aug 07 '17 at 06:24
  • @KarolBaum what is a memory leak? Is it memory that is forever lost to your computer? Is it a valgrind 'definitely lost' report? Is it memory you have allocated in your app, but neglected or decided does not need to be freed before process termination? – Martin James Aug 07 '17 at 06:26
  • @HenriMenke 'the OS will have to walk the whole memory to detect where those pointers you didn't clean up actually point' - in general, no. It doesn't know about your malloc/new pointers. It does not care. The OS is only concerned with pages allocated to your process and, if not explicitly shared, will just splat them all, irrespective of content. – Martin James Aug 07 '17 at 06:30
  • @HenriMenke "You cannot say whether your program will ever terminate. See the Halting Problem." That's entirely wrong. – H Walters Aug 07 '17 at 06:31
  • @HWalters So you can say in advance whether your machine might decide to suspend your program? I wish I could see the future as well. – Henri Menke Aug 07 '17 at 06:34
  • ..and there's memory that you have allocated that you explicitly need to NOT be deallocated before the OS terminates your process. Memory in libraries, maybe associated with global management, with locks. Memory in object pools that may be in use, at any time by one of several threads, one or more that you cannot terminate with user code. This is why I suspect that this question is too broad. I have not close/downvoted it, but have asked on SOCVR for advice. It may be a good question, but I'm not sure, so I've left it:) – Martin James Aug 07 '17 at 06:44
  • (1) If you aren't is a hosted environment, there is no OS to clean up after you. (2) Memory is just one form of resource. Not all resources are cleaned up by the OS neatly for you. Your users will not appreciate you making a mess of their system. So don't get into this lazy habit. – StoryTeller - Unslander Monica Aug 07 '17 at 07:30
  • @StoryTeller it's not a 'lazy habit' when it's intentional and essential. The problem with broad declarations like 'you MUST always explicitly free all memory before process termination' is that there are some circumstances when you must not. I don't deny that memory/instances/whatever should be freed if you are sure that you no longer need them and you're sure that nothing is still using them, but it's not always possible to be that sure. – Martin James Aug 07 '17 at 07:59
  • @MartinJames - I never made a "you must" claim, so you can straw-man someone else. I presented the broader picture of general resource management, with a caution not to give in to the "Oh, the OS will clean it" mentality. – StoryTeller - Unslander Monica Aug 07 '17 at 08:01
  • @StoryTeller I never claimed that you did. It is, however, a claim that is often made, eg 'You have to deallocate your memory', (in this question!). The OP made no mention of resources other than memory, so 'general resource management' is off-topic. – Martin James Aug 07 '17 at 08:07
  • @MartinJames - Says who? The commenting section is to comment on the issue in the post, which I perceive as larger. Your opinion has no more validity than mine. You have no business silencing anyone because of some slippery slope their comment supposedly introduces. – StoryTeller - Unslander Monica Aug 07 '17 at 08:10
  • 'Says who' well, me, I guess... Also, I cannot silence anyone on SO, nor do I wish to. – Martin James Aug 07 '17 at 08:17
  • @HenriMenke If I write a program that counts out integers, stopping when it finds a counterexample to the Goldbach conjecture, I personally cannot tell you if it would stop. If we could build a machine that could solve the HP in the general case, however, I could just feed this program to that machine; if it says it halts, the GC is false. If not, it's true. Turing proved _this_ is impossible. So we can say goodbye to such powerful theorem proving approaches. OTOH, I can write a program that counts to 10^16 and stops, and tell that it will halt. ... – H Walters Aug 07 '17 at 08:19
  • @MartinJames - You started by straw-manning my comment, then complaining it's a slippery slope and off-topic anyway, because... reasons. It's an attempt to silence in any reasonable debate. You didn't actually respond to what I said other than have a knee-jerk reaction. – StoryTeller - Unslander Monica Aug 07 '17 at 08:20
  • OK. 'If you aren't in a hosted environment, there is no OS to clean up after you' - the OP clearly mentioned 'return 0', ie hosted environment. 'Memory is just one form of resource. Not all resources are cleaned up by the OS neatly for you' - I don't deny that, but it's not relevant to the OP's question which is explicitly about memory. The implication that failing to explicitly free memory is a lazy habit is a generalization that often fails. The very nature of some designs mandates that some allocations must not be freed except by the OS at termination. – Martin James Aug 07 '17 at 08:31
  • @HenriMenke You can't use the HP (or Turing's proof to be more exact) to claim that we can't tell if particular programs will halt or not. What Turing's proof shows is that, assuming we're as powerful as TM's, there _are programs that_ we can't tell halt or not; it doesn't show that _we cannot tell if any_ programs halt. – H Walters Aug 07 '17 at 08:32
  • @MartinJames - (1) [`return 0;` is by itself no indication of any environment](https://timsong-cpp.github.io/cppwp/n4659/basic.start.main#:argc). (2) Deny it or not, you cannot attest to its relevance. The OP mentioned they have a linked list. Can you say for sure the list doesn't contain handles to other resources that require releasing? I think not. (3) Those designs are too few and far between to merit a general outlook on the subject. And many of those the C++ run-time tries to help you with in a structured manner (even in a free-standing environment). – StoryTeller - Unslander Monica Aug 07 '17 at 08:57
  • @HWalters True, but most programs are designed to stop, or contain an infinite loop whose code is intended to be safe with respect to resources. Not cleaning up because you have some scientific simulation for a given conjecture or such is a very special case that is not to be taken into account. Too many apps behave so badly just because they rely on bad assumptions (the OS will do it for me, this race condition will never happen, etc.). – Jean-Baptiste Yunès Aug 07 '17 at 08:58

7 Answers

2

This is a topic where two answers are required, because C and C++ follow completely different philosophies when it comes to resource management.

C

When using malloc/free in C the only affected resource is memory. That leads to what other answers already brought up: You may be tempted to not free memory at the end of the program because the OS will reclaim all the process’ memory anyway. Since I don’t program in C I can’t say if and when that may be justified.

C++

C++ is different. There is no excuse for not destroying your objects. C++ ties acquiring and releasing memory to general initialization and cleanup. When you create an object its constructor runs, and when you destroy it its destructor runs. That’s true for stack-allocated objects as well as for free-store-allocated ones (new and delete). If you don’t delete, the destructor does not run either, which means essential actions like closing database or network connections, flushing files to disk, etc. may not happen.

In C++ never think of “memory management”, always think of “resource management”. Memory is just one among many types of resources.

Then again, in the C++ universe the whole question feels a bit strange. It shouldn’t even come up because if you follow best practices you use C++’s automatic resource management: either by creating objects on the stack directly or by using resource management wrappers[1]. If you catch yourself writing a naked new – and hopefully a corresponding delete – you should have as solid a justification for it as when writing a goto.

[1] The smart pointers std::unique_ptr and std::shared_ptr are the obvious resource managers. But there are many, for example std::vector. Granted, it does a lot more, but one of its jobs is taking care of the piece of heap memory where the vector’s items are stored.
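A minimal sketch of this, assuming a hypothetical Connection resource: the destructor carries the cleanup, and std::unique_ptr guarantees it runs without any naked delete:

```cpp
#include <memory>

// RAII sketch: a hypothetical Connection resource whose destructor does
// the cleanup (closing, flushing, ...), so forgetting "delete" is impossible.
struct Connection {
    explicit Connection(bool* closed_flag) : closed(closed_flag) {}
    ~Connection() { *closed = true; }   // guaranteed cleanup on destruction
    bool* closed;
};

// The caller never writes delete: unique_ptr's destructor does it.
bool demo() {
    bool closed = false;
    {
        auto conn = std::make_unique<Connection>(&closed);
    }   // conn leaves scope here; ~Connection runs
    return closed;   // cleanup has already happened
}
```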

besc
  • 2,507
  • 13
  • 10
2

Beyond what has already been properly answered, there are a few points I would like to add for better clarification.

  1. I would like to refer you to the C++ standard website (https://isocpp.org), where you will be able to find the most authoritative answers. Once you get familiar with the most important authors, you will feel more confident trusting the answers you receive.

  2. That said, I would like to invite you to read carefully the C++ Core Guidelines, a document announced by Bjarne Stroustrup (the creator of C++) in 2015, which is permanently updated, principally by Bjarne Stroustrup and Herb Sutter ("a prominent C++ expert" and, currently, the head of the C++ ISO Committee): http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines. Other important contributors provide the results of their research as well. You will find that the document was updated a few days ago (July 31st, 2017).

  3. In particular, turning to your questions, I found you came back several times to one of them whose answer has been missing: "what about smart pointers?". In the mentioned document, you will find that smart pointers perform a customized and limited garbage collection which effectively releases resources. Given your questions, I would suggest reviewing in depth:

    a) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rr-raii (this C++ rule, RAII, is a key point for a deeper understanding of the C++ philosophy, and will shed "light" on the matter).

    b) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource

    c) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rr-mallocfree

    d) http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-cpl

  4. Finally, in case you want to go further (into the discussions), I would recommend googling "smart pointers are not so smart" (quoting from memory), written by Scott Meyers, another of the most important writers. [Note: my quotation comes from one of Dr. Meyers' books; it is not the title of an article.]

Hope this helps.

  • +1 for a very good list of resources. May I suggest to remove the “garbage collection” term? C++ smart pointers produce no garbage. In contrast to Java/.NET-style garbage collection objects are immediately destroyed once their owner(s) is(are) done with them. In P.8 the Core Guidelines even state: “[RAII] eliminates the need for “garbage collection” (by generating no garbage)”. – besc Aug 08 '17 at 17:08
  • @besc: I should have introduced a distinction. Dr. BS uses the expression "gc" exactly as I mentioned: *customized and limited*, pointing to the *way* that smart pointers (sp) work. So: after RAII, at some point in your program, resource *x* is no longer useful; then, it turns into *garbage*. And it will remain as such until the sp reaches the end of its scope. Thus: this is the sense in which Dr. BS talks about "gc" (there is a video, on MS Channel 9; I guess it should be GoingNative 12, but I was not able to find the segment where Dr. BS mentioned this. I will look for it and provide the link). – Pablo Esteban Camacho Aug 08 '17 at 18:12
  • @Pablo Esteban Camacho Oh, yes, I see. That’s probably why that phrase sounded familiar from the beginning. I guess I disagree with Stroustrup here. But the real point why I dislike the garbage term is because it’s incredibly loaded. I at least know nobody who doesn’t immediately and intuitively think of Java and/or .NET when hearing the term “garbage collection” – unreferenced objects that get identified and killed in a regular sweep. That’s quite different from what happens in C++. – besc Aug 08 '17 at 20:31
  • @besc: Let me use "customized" terminology, ok?. Do not confuse "gc *action*" with "gc *process*". The *action* is what C++ does: just picking up this little garbage which has been dropped. The *process* is what Java/.NET does: everything is stopped to leave the bulldozer free to clean resources. Actually, Dr. BS puts an example: he was working on Java, with a Sun Microsystems machine, 64 micros; at some point, one micro was performing gc. All the other 63 were waiting...! (I promise I will post that link). – Pablo Esteban Camacho Aug 09 '17 at 00:44
  • *Do not confuse "gc action" with "gc process".* That’s exactly what I’m worried about people will do when hearing the “garbage collection” term. :) – besc Aug 09 '17 at 17:20
1

Freeing the memory before your program exits is good practice, but it is not strictly required, since all the memory is freed once a regular program ends. At least this is the case on current operating systems.

But programs tend to evolve over time. So, maybe 6 months later, you decide to use "that already existing linked list implementation" in another project. Or maybe you will use it in a shared DLL which stays loaded in memory as long as the OS is running. Or maybe you extend your demo so that it runs for a while and you are limited on memory.

There are many possibilities that something that is not recommended but "works" today goes haywire tomorrow. Best practices are recommended for a reason.

But to be clear, you are not required to take care of freeing your pointers in one-shot applications.

xycf7
  • 913
  • 5
  • 10
  • 2
    »since all the memory is freed once a regular program ends« What if it never ends because it's a daemon? This answer is bad and misleading: -1 – Henri Menke Aug 07 '17 at 05:31
  • 2
    That is why I used the term "Regular program". Daemons are not regular programs, they are, well, daemons. And I clearly stated what happens when it is not a regular program in the 2nd paragraph. – xycf7 Aug 07 '17 at 05:43
  • 1
    @Henri Menke: I do not understand your point: why are you taking a point off xycf7 for not considering the daemon scenario, when it was not included in the question? When you answer a question, you are not supposed to consider ALL the scenarios that the requester has not included. Actually, you may restrict your answer to what has been asked. Even more, xycf7 implicitly covered your "daemon example". Sorry, but I consider your attitude unacceptable. – Karol Baum Aug 07 '17 at 05:43
  • 1
    @xycf7: thanks very much for your fast and clear answer! – Karol Baum Aug 07 '17 at 05:45
  • I think this is a clear and proper answer to the question, and I don't understand the downvotes. I'm voting it up. – SBS Aug 07 '17 at 07:30
1

If you're running on an operating system that frees memory for the program on exit (which would be pretty much everything), I would say that freeing memory before exit is not optimal but might be a good practice.

It might be a good practice because things change over time and you might eventually need the ability to free things properly without exiting. So from a good engineering point of view you might want to free memory.

It is not optimal because the operating system is orders of magnitude better at freeing your memory in bulk than your program is. Walking a linked list and freeing one element at a time will bring each of those elements into the cache/TLB just to throw them away, and in the worst case you might even need to swap them in. Two decades ago I saw research showing that common implementations of in-line malloc boundary tags could make the process of manually freeing memory on exit 5-6 orders of magnitude slower in real applications (this was with swap, which might be much less common today; also, I don't remember the actual number, this is a conservative guess; the actual slowdown could have been much larger, it was minutes vs. milliseconds). Furthermore, with most malloc implementations freeing doesn't do anything from the point of view of the operating system anyway; the operating system still has to go through all the effort of actually properly freeing the memory.
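The element-by-element teardown described here might look like the following (Node and free_list are hypothetical names); each delete touches a node's memory one last time just to discard it:

```cpp
#include <cstddef>

// Hypothetical node type, as in the question's linked list.
struct Node {
    int value;
    Node* next;
};

// Free the list one node at a time. Each delete pulls the node into
// cache/TLB just before discarding it. On normal process exit this work
// is redundant on OSes that reclaim the process's pages in bulk, but it
// is the only correct option while the program keeps running.
std::size_t free_list(Node* head) {
    std::size_t freed = 0;
    while (head) {
        Node* next = head->next;   // save the link before deleting the node
        delete head;
        head = next;
        ++freed;
    }
    return freed;
}
```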

Art
  • 19,807
  • 1
  • 34
  • 60
  • Yeah - there is this issue that, with most environments, there are two memory managers at work. The OS memory-management layer does not care about your linked lists, structs etc.; it keeps references to all memory used by your process and, at process termination, can easily splat all memory that is not explicitly shared with another process. That MM is well-tested, hugely capable, unlikely to fail and you don't have to write/test/debug it. Then there's the C or C++ or whatever memory sub-allocator that is process-specific, faster and manages your ll objects etc. These two get confused :( – Martin James Aug 07 '17 at 06:05
  • 2
    **This is nonsense.** There is literally no excuse for not freeing your memory. The moment you do not free you have to pray that you do not run out of memory before you're done. – Henri Menke Aug 07 '17 at 06:27
  • Well, I upvoted the 'nonsense'. 'There is literally no excuse for not freeing your memory' - I agree, which is why the OS memory management layer is tested thoroughly. There are certainly 'excuses' for not explicitly freeing memory in user code, some of which Art has explained. – Martin James Aug 07 '17 at 06:50
  • 1
    @HenriMenke "literally no excuse"? I'd say that an operation being several magnitudes faster is a pretty decent excuse to take into consideration regardless of whatever software engineering counter-argument we can make. – Art Aug 07 '17 at 06:51
  • @Art: I am totally surprised by your answer.... Are you sure this OS MM is a current implementation in UNIX (macOS, Solaris...), and Linux or is it just (trying to guess...) something implemented in Windows? You say: "It might be a good practice because things change over time and you might need to have the ability to not exit and properly free things.", and I totally agree with you. But, if we arrive to this "ideal" scenario ("you cannot exit without properly freeing"), what would be the purpose of OS MM (of course, just in relationship with your "safe" C or C++ app? (continuing...) – Karol Baum Aug 07 '17 at 06:52
  • @Art: I am also more surprised by "manual freeing makes real apps run slower" and "most malloc freeing doesn't do anything". Why don't they do anything? Maybe the OS Memory Manager is blocking them...? What about smart pointers?? Are they really able to free memory or not...?? – Karol Baum Aug 07 '17 at 06:53
  • @MartinJames: Appreciate your confirmation of the OS MM's "job". It still remains unclear to me why free / free[], delete / delete[] actually do nothing... Is the OS MM avoiding their work? Also for smart pointers...? – Karol Baum Aug 07 '17 at 06:56
  • @KarolBaum I'm not aware of any current operating system that doesn't free memory on process exit. They could exist, especially some very bare-bones embedded thing, but I'm not aware of any. – Art Aug 07 '17 at 06:58
  • @KarolBaum When it comes to malloc not actually freeing anything to the system, you need to understand the different layers. The kernel is handing out memory in large chunks, malloc then splits those chunks and manages smaller allocations within them. For efficiency reasons most malloc implementations will not give back those chunks back to the kernel even after everything has been freed in them because it's assuming that they'll be needed again soon. – Art Aug 07 '17 at 07:02
  • What is clear for me, so far, is that we have 2 worlds here: the C or C++ one, with its good practices, to avoid memory leak, to improve performance, so forth, and the OS world: for which a C or C++ effort of freeing memory means having slower apps. This is the real **nonsense**! I will appreciate if someone can provide a "raw" perspective of what is really happening with smart pointers.... – Karol Baum Aug 07 '17 at 07:08
  • The reliance on a modern OS that releases memory on program termination is a huge assumption. Like it or not, there are still real-world embedded systems where applications run on bare metal (no OS), bugs in operating systems, programs that lock resources in ways that prevent them being released by the OS, programs that must run 365/24/7 which means memory can't be released on program termination, etc etc. – Peter Aug 07 '17 at 07:08
  • @Art: Your wrote: "For efficiency reasons most malloc implementations will not give back those chunks back to the kernel even after everything has been freed in them because it's assuming that they'll be needed again soon". I appreciate your clarification! – Karol Baum Aug 07 '17 at 07:10
  • @KarolBaum In the scenario described by Art, the free/delete DO do something, but it's a pointless duplication of effort. delete/dispose/free/whatever free your referenced memory to the C/C++/whatever sub-allocator as supplied by your language runtime. This takes effort, time, code, (that you have to write and test and debug and maintain) and at the end of all that, the OS memory-manager deallocates the entire sub-allocator heap in the same way it deallocates the process data, stacks, code etc. So, it can be said that, under those circumstances, that the free etc 'do nothing':) – Martin James Aug 07 '17 at 07:12
  • @Peter 'still real-world embedded systems where applications run on bare metal' yes, there are, but those systems don't load processes and handle any 'return 0'. It's clear that the OP is describing a 'real' OS, such as Linux or Windows. – Martin James Aug 07 '17 at 07:16
  • @Martin: well... Tonight I have learned a lot of things from you guys! Thanks very much! – Karol Baum Aug 07 '17 at 07:16
1
  • what is really needed ("If you do not deallocate you have a leak of memory because of....);

You have to deallocate every resource you no longer need, even in the middle of a run. You sometimes need temporary dynamically allocated memory; deallocate it as soon as your logic says it will not be used in the future.

  • what is a good practice, but not strictly required.

Good practice is what I said: "always deallocate what you no longer need". You can sometimes defer the deallocation for good reasons (for example, it may be more important to finish some other tasks than to deallocate memory at a given instant). On most OSes all memory used by a process is automatically released, but this is not a requirement!

  • If I am developing a demo app, I do not need either to use a smart pointer (if I am in C++).

On the contrary, always prefer using smart pointers, because if you use them correctly then deallocation will take place at the right places!

  • If I am in C, I do not need to deallocate, because while arriving at app end of scope, every pointer will be deleted.

No, that is not a good practice, deallocate as soon as possible.
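"Deallocate as soon as possible" in practice: a sketch of a temporary buffer released mid-run rather than at exit (checksum is a hypothetical helper, not from the question):

```cpp
#include <cstdlib>
#include <cstring>

// Temporary working memory, released the moment the logic is done with
// it -- not deferred to program exit.
int checksum(const char* data, std::size_t n) {
    char* scratch = static_cast<char*>(std::malloc(n));  // temporary buffer
    if (!scratch)
        return -1;
    std::memcpy(scratch, data, n);
    int sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += scratch[i];
    std::free(scratch);   // freed here, as soon as it is no longer needed
    return sum;
}
```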

Jean-Baptiste Yunès
  • 34,548
  • 4
  • 48
  • 69
  • Thanks for your answer. What about Art's and Martin's answers related to: a) "Freeing memory effort in C and C++ turns your app slower", and b) "OS Memory Manager turns your coding effort redundant"? Do you have any "C or C++ *official* documentation" on this? Please, look at the following: I am not trying to find the way to jump over what we are habituated to do; I am just surprised by the words "slower" and "redundant" and trying to arrive at the "nude truth" of these assessments. – Karol Baum Aug 07 '17 at 07:37
  • "slower": of course freeing has a price to pay, but resources are shared and rare so you need to be fair. There exists conditions under which you can "violate" those rules (I give you one), but for the sake of your code you need to behave correctly. "redundant": in the case of exiting without explicitly deallocating, many programmers behave such, but remember some OSes or special environments do not free for you! If deallocating is critical to you, then there exists different kind of strategies to optimize this but most of the time it doesn't worth the effort. – Jean-Baptiste Yunès Aug 07 '17 at 08:52
0

Please, I want to clearly distinguish between:

  • what is really needed ("If you do not deallocate you have a leak of memory because of....);
  • what is a good practice, but not strictly required.

What is "really needed" is that the total memory used by your program needs to remain within limits of what is available to your program. Continually allocating memory and never deallocating it means the amount of memory consumed by your program keeps increasing for as long as it is running. If the program uses more memory than is available to it, then subsequent allocations may well fail, and the program will probably not be able to function as intended (e.g. an algorithm that relies on being able to use a buffer cannot run correctly if the buffer cannot be allocated).

As a simple example the loop

  while (1)
  {
       do_something();
  }

may well fail in ugly ways if do_something() allocates memory and never releases it.
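A sketch of what a leaking do_something() might look like, next to a fixed variant (the 1024-byte size is arbitrary):

```cpp
#include <cstdlib>

// Leaky variant: 1024 bytes vanish on every call, so a while(1) loop
// around it grows the process until allocation eventually fails.
bool do_something_leaky() {
    void* buf = std::malloc(1024);
    return buf != nullptr;   // buf is used, then forgotten: never freed
}

// Fixed variant: the allocation is paired with exactly one free, so the
// loop's memory use stays flat no matter how long it runs.
bool do_something_fixed() {
    void* buf = std::malloc(1024);
    if (!buf)
        return false;
    // ... use buf ...
    std::free(buf);
    return true;
}
```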

There is, however, nothing (short of work ethic, or caring about avoiding the grumpiness of employers or customers) that absolutely forces a programmer to deallocate any dynamically allocated memory. There is, practically, no absolute need for dynamically allocated memory to be deallocated if:

  • The programmer simply does not care about the consequences (e.g. complaints by users) of a program running out of memory, and needing to be terminated or reset; OR
  • It is somehow known that, although the program dynamically allocates memory, it will never allocate more than needed; AND
  • It is known that the operating system will clean up properly as a program terminates.

However, for programmers who care about their users, or are using dynamic memory allocation because they DO NOT KNOW how much memory their program needs, or are targeting an OS that does not properly clean up after programs terminate, deallocating memory is certainly advisable.

Good practice is normally to systematically ensure that every dynamic memory allocation is subsequently followed by exactly one explicit deallocation of that memory. Doing so, avoids all the potential problems associated with not deallocating memory.

Peter
  • 35,646
  • 4
  • 32
  • 74
0

Adding to what has already been written in other answers, I would mention some more reasons to free all objects allocated during the program execution before exiting to the OS:

  • If you run the program under a memory checker such as valgrind or purify, it will tell you if indeed all objects have been freed. Any objects still allocated may indicate memory leaks in the program: objects that internal routines have lost track of and forgot to free in due time. Such memory leaks can lead to program failures if they happen in repetitive tasks and cause the memory allocator to run out of space.

  • If the allocated objects have been corrupted, trying to free them all may cause undefined behavior, hopefully segmentation faults, which are extra chances to identify and correct bugs.

This process may be costly and is not necessary for most environments, so one can make it optional, via a command line argument or an environment variable so as to use it in beta and debugging sessions and skip it in production.

Note however that some complex data-structures may be impossible to free without disproportionate efforts or extra space overhead. For short-lived executables running under any modern OS, this is not a real problem.
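The opt-in teardown could be guarded like this (FULL_CLEANUP is a hypothetical environment-variable name, and free_everything() a hypothetical teardown routine):

```cpp
#include <cstdlib>

// Run the expensive exit-time teardown only when explicitly requested,
// e.g. during valgrind or debugging sessions, and skip it in production.
bool full_cleanup_requested() {
    return std::getenv("FULL_CLEANUP") != nullptr;
}

// At exit, a program would then do something like:
//   if (full_cleanup_requested()) free_everything();  // hypothetical
```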

chqrlie
  • 131,814
  • 10
  • 121
  • 189