4

I have a program that does the following:

  1. Allocate memory (heap)
  2. Do some processing
  3. Allocate more memory (heap)
  4. Do some processing

It does so a few times, then exits.

I don't really care about the memory footprint of the program, only the execution time.

Would it be a bad thing not to free the memory, on the grounds that it might actually take more time to process the `free`/`delete` than to just skip it and move on to the next step?

In the end, the program will exit and it won't matter anymore.

I understand that the only way to be sure is to do some benchmarks on my computer, but I'm interested in the theoretical pros and cons.

NB: let's assume a modern OS that will clean up the memory at exit.
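Roughly, the pattern looks like this (a minimal sketch; `process` and the buffer sizes are placeholders for the real work):

```
#include <cstddef>
#include <cstdlib>

// 'process' and the sizes below are placeholders for the real work.
static void process(char *buf, std::size_t n) { (void)buf; (void)n; /* ... */ }

int main() {
    for (int step = 0; step < 4; ++step) {
        const std::size_t n = 1 << 20;                      // 1./3. allocate (heap)
        char *buf = static_cast<char *>(std::malloc(n));
        if (buf == nullptr) return EXIT_FAILURE;
        process(buf, n);                                    // 2./4. do some processing
        // no free(buf) here -- that is exactly what the question is about
    }
    return EXIT_SUCCESS;                                    // the OS reclaims everything at exit
}
```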

Antzi
  • possible duplicate of http://stackoverflow.com/questions/36584062/should-i-free-memory-before-exit – Kami Kaze Jan 26 '17 at 08:28
  • This is really impossible to answer; sometimes `free()` will save time. – Stargateur Jan 26 '17 at 08:28
  • In this scenario you might be best off operating on global-scope buffers/structs. It depends on your use case and whether you actually know in advance what your maximum requirements will be. If you're using C++ you should be using RAII, so you either tagged that in error or you should really read up on what RAII is. – kamikaze Jan 26 '17 at 08:30
  • @Stargateur can you give an example of that? (out of curiosity) – Kami Kaze Jan 26 '17 at 08:31
  • @KamiKaze similar but not quite: I only care about performance, and not specifically about `exit(3)` behaviour. – Antzi Jan 26 '17 at 08:33
  • If you can predict a maximum amount of memory, you should profile your memory with static structs. – LPs Jan 26 '17 at 08:33
  • @KamiKaze - Imagine the call to `free` releases a chunk that allows the next call to `malloc` to find a suitable block faster. The amortized time is thus better. – StoryTeller - Unslander Monica Jan 26 '17 at 08:34
  • @Stargateur My question is about what scenarios would include `free()` saving time (except memory exhaustion) – Antzi Jan 26 '17 at 08:34
  • @Antzi I don't know if you tagged the correct one with your comment here, neither do I know if we have 2 distinct tags.... – Kami Kaze Jan 26 '17 at 08:34
  • "Should I free memory...". **Why not?** If you'd do so, you would avoid many memory related problems with two lines of code... – zx485 Jan 26 '17 at 08:38
  • @Antzi If you `free`, future mallocs will likely end up reusing the memory of previous mallocs. This can save time because that old memory will not have to be obtained from the OS and it will likely be in cache. Page faults and cache misses can slow things down quite a bit. – Petr Skocik Jan 26 '17 at 08:41
  • @PSkocik that should be an answer :) – Antzi Jan 26 '17 at 08:47
  • @Antzi You inspired me to this question: http://stackoverflow.com/q/41869662/4961259 You might want to read the answers. – Kami Kaze Jan 26 '17 at 10:20

5 Answers

3

Whether not releasing allocated memory will be a performance win or loss depends on your allocation patterns and how you use the memory. If you free, future mallocs will likely end up reusing the memory of previous mallocs. This can save time because that old memory will not have to be obtained from the OS and it will likely be in cache. Page faults and cache misses can slow things down quite a bit.

If you care about this, benchmark a freeing and not-freeing variant of your program.
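For example, a rough way to compare the two variants (a sketch only: the allocation size, iteration count, and the `FREE_MEMORY` switch are arbitrary choices, not tuned to any real workload):

```
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Build once with FREE_MEMORY=1 and once with FREE_MEMORY=0
// (e.g. -DFREE_MEMORY=0 on gcc/clang) and compare the timings.
#ifndef FREE_MEMORY
#define FREE_MEMORY 1
#endif

int main() {
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000; ++i) {
        char *p = static_cast<char *>(std::malloc(1 << 20));
        if (p == nullptr) return EXIT_FAILURE;
        std::memset(p, i, 1 << 20);   // touch the memory so the pages are really used
#if FREE_MEMORY
        std::free(p);                 // freed blocks can be reused by later mallocs
#endif
    }
    const auto end = std::chrono::steady_clock::now();
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("%lld ms\n", static_cast<long long>(ms));
    return EXIT_SUCCESS;
}
```

Run each variant a few times on the machine you actually care about; the allocator, the OS and the working-set size all influence which side wins.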

Petr Skocik
3

There are a number of potential problems. Examples include:

  • If you can't predict in advance how much memory is actually needed - which is among the most common reasons to use dynamic memory allocation - then your program may exhaust available memory (either by exhausting system memory, or because the host operating system imposes quotas on your program). After that, it may or may not run as fast as required, but (even ignoring timing concerns) it will probably run incorrectly and produce incorrect results. It doesn't matter how much memory your host system has, or what quota the host system enforces for the programs it hosts - it is possible to exhaust that amount.
  • Not all operating systems release memory as a program exits. And, among those that do, there is potential that the memory is not fully released - both due to bugs in the OS itself, and due to actions by your program (e.g. allocating resources that are shared with other programs). In such cases, if your program is run several times, you may find that the program (when run for the 32nd time [to pick a random number] or so) will inexplicably fail.
  • As a program allocates more memory, then depending on how dynamic memory allocation is managed (e.g. the data structures used by `malloc()`), the allocations themselves can slow down if memory is not released. That can cause your program to miss its timing constraints as it allocates more memory. Releasing memory when it is no longer needed can alleviate such concerns (albeit with other effects, such as memory fragmentation).
  • If you get into the habit of not releasing dynamically allocated memory, you may well also (for similar reasons of "efficiency") not bother to check whether allocations succeed - after all, that takes time too. And that causes problems should allocations ever fail (e.g. abnormal program terminations, trashing memory, producing wrong results without warning, etc). A minimal check is sketched after this list.
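For illustration, a minimal allocation check could look like this (the size and names are arbitrary, not taken from the question):

```
#include <cstdio>
#include <cstdlib>

int main() {
    // Hypothetical buffer; the point is only the error check and the matching free.
    double *samples = static_cast<double *>(std::malloc(1000 * sizeof(double)));
    if (samples == nullptr) {                 // allocation can fail: handle it
        std::fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    /* ... use samples ... */
    std::free(samples);                       // release it when no longer needed
    return EXIT_SUCCESS;
}
```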

The bottom line is that allocating memory and not deallocating it is a very poor (and lazy) strategy if you care at all about program performance or timing. If you really care about program performance/timing, you will not actually dynamically allocate memory at all.

If you are using dynamic memory allocation, then you are better off releasing it when no longer needed, even if you don't care about memory footprint. Depending on circumstances, you may find the program runs either faster or slower if you release memory properly (it depends on numerous variables, including those I've mentioned above, and more). And, should you ever need to reuse your code in a larger program - which, practically, happens more often than not in the real world - you are more likely to run into problems (memory concerns, performance concerns) if your code does not release memory properly.

Peter
1

Where are you allocating the memory, the stack or the heap? In your case I would advise you to allocate your data on the stack (i.e. not using any memory allocation functions).

However, most OSes deallocate your share of the heap when your program exits. If you are using the heap, and you are writing modern C++, you will want to read up on smart pointers.

Creating stack objects has the lowest performance hit, and they are scope based (which is even cooler). The only con is the very limited space.

One other technique is to pre-allocate a chunk of memory on the heap initially and manage the allocated space yourself; this is common in emulators.
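A minimal sketch of that pre-allocation idea (a simple bump allocator; the names and sizes are made up, and it deliberately supports no individual frees):

```
#include <cstddef>
#include <cstdlib>

// One up-front heap block; later "allocations" just bump a cursor.
struct Arena {
    char *base;
    std::size_t size;
    std::size_t used = 0;

    explicit Arena(std::size_t n)
        : base(static_cast<char *>(std::malloc(n))), size(base ? n : 0) {}
    ~Arena() { std::free(base); }             // a single free at the end

    void *alloc(std::size_t n) {
        // Round the request up to a conservative alignment.
        const std::size_t a = alignof(std::max_align_t);
        n = (n + a - 1) & ~(a - 1);
        if (used + n > size) return nullptr;  // arena exhausted
        void *p = base + used;
        used += n;
        return p;
    }
};

int main() {
    Arena arena(1 << 20);                     // pre-allocate 1 MiB once
    int *xs = static_cast<int *>(arena.alloc(100 * sizeof(int)));
    if (xs == nullptr) return EXIT_FAILURE;
    xs[0] = 42;                               // use it like any other buffer
    return EXIT_SUCCESS;                      // everything released in ~Arena
}
```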

Gamma.X
  • heap. Smart pointers are neat, but it's the same as using free/delete – Antzi Jan 26 '17 at 08:40
  • Nope, they are different. 1. They save you dev time, 2. The delete only occurs when the object goes out of scope. – Gamma.X Jan 26 '17 at 08:44
  • Moreover, you should be careful with memory allocations. In the end, when you over-allocate memory, your program will slow down, and the entire system too. So that you don't 'kill' the optimization you aim to achieve by not calling free, you should delete unused memory. – Gamma.X Jan 26 '17 at 08:47
0

This depends heavily on where you run the program: most modern OSes will clear the memory allocated by a program on exit. So as long as you don't allocate big amounts, there should be no issue.

But if you work on embedded systems, this might not be true.

Edit: If the OS clears the memory after execution, then the only issue might be that you allocate more than is available. But as long as this doesn't happen, I don't see a problem. It is common practice for some programs nowadays.

Kami Kaze
0

You should always free all allocated memory before program termination for several reasons.

  1. This is commonly considered best practice.
  2. It is unclear whether there is any benefit (in terms of execution speed) from letting the run-time clean up after you.
  3. This avoids memory leaks in case the run-time environment fails to collect all memory.
  4. Should you ever extend this program or build on it, the memory leak may spread into other code.

Furthermore, in C++ you should avoid the need for 'manual' memory de-allocation and rely on RAII, such as that provided by the standard library via its containers and smart pointers.
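For instance (a minimal sketch, assuming the data is just a buffer of doubles):

```
#include <memory>
#include <vector>

int main() {
    {
        std::vector<double> buffer(1000000);   // heap allocation owned by the vector
        buffer[0] = 1.0;                       // ... process buffer ...
    }                                          // memory released here, no delete needed

    auto arr = std::make_unique<int[]>(1024);  // smart pointer owning a heap array
    arr[0] = 1;                                // ... process arr ...
    return 0;                                  // unique_ptr frees the array automatically
}
```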

Walter