
I am continuing the work on my assignment involving hash tables in C++. I asked a question on here yesterday about the "proper way" to free memory, since I was having some issues and Valgrind showed I was leaking 17,000,000 bytes of memory.

I worked on the code all day today and I think I have mostly fixed it. After running this command again

valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --num-callers=20 --log-file=valgrind.log ./main

Valgrind shows there are still some blocks that are reachable. After doing some research on the Valgrind FAQ page and this, I found out that the "still reachable" section doesn't really matter that much: it only refers to allocations that fit only the first definition of "memory leak". These blocks were not freed, but they could have been freed (if the programmer had wanted to) because the program was still keeping track of pointers to those memory blocks.
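
Just to check that I understand the two categories, here is a tiny made-up example (nothing to do with my actual assignment) of what I think the difference looks like:

```cpp
// A pointer that survives until exit keeps its block "still reachable";
// overwriting the last pointer to a block makes it "definitely lost".
int* still_reachable = nullptr;   // global, so the block stays reachable at exit

void leak_for_real() {
    int* p = new int[10];   // last pointer to this block...
    p = nullptr;            // ...is overwritten: Valgrind reports "definitely lost"
}

int main() {
    still_reachable = new int[10]; // never deleted, but the pointer survives: "still reachable"
    leak_for_real();
    return 0;                      // the OS reclaims everything at process exit
}
```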

That made me think about 3 things.

Firstly, after reading the FAQ and the previous SO question, I am assuming a real-life programmer "wouldn't care". I've heard about the military missile system that leaked memory and was never fixed.

Secondly, this assignment is for my Data Structures class, which sort of focuses on memory management (pointers for BSTs, linked lists, ...). The professor is quite a strict grader, but I am not sure if he runs valgrind on every submitted assignment. I mention this because, for the love of god, I cannot find the "still reachable" blocks in my code, and it would save me a lot of time and trouble if I could simply ignore them.

Thirdly, I am not familiar with many programming languages, but I've heard of garbage collectors (Java, JS, C#). Is freeing unused memory essentially what they do? And is not having a garbage collector in C++ a good or a bad thing?

I guess my final question is: how much (on a scale from 1 to 10) should I care about the still-reachable blocks if, as I understand it, they are technically not a memory leak, just poor memory management?

Thank you

Valgrind report (it's a lot bigger than this, I just pasted the "important part"):

==350555== LEAK SUMMARY:
==350555==    definitely lost: 0 bytes in 0 blocks
==350555==    indirectly lost: 0 bytes in 0 blocks
==350555==      possibly lost: 0 bytes in 0 blocks
==350555==    still reachable: 20,919 bytes in 292 blocks
==350555==         suppressed: 0 bytes in 0 blocks
==350555== 
==350555== For lists of detected and suppressed errors, rerun with: -s
==350555== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
  • Keep in mind that the standard library allocates memory before main starts and may not free it all - so you have to make an empty program (with includes for the standard library headers) and run Valgrind on it to see if it is you or the standard library or the runtime library. – Jerry Jeremiah May 11 '23 at 02:03
  • Wow, I did not know that! You are right, I am using around 16 header files and `110,723 bytes` were allocated. The only problem is that this time `valgrind` says `==474051== All heap blocks were freed -- no leaks are possible`, which makes me believe it is me `:(` –  May 11 '23 at 02:07
  • You should be worried only if you expect there not to be any reachable memory because you've gone to the trouble of freeing everything before exit. If you're relying on exit freeing it for you, then it is fine. If you don't know which you are doing, then you have a problem. – Chris Dodd May 11 '23 at 02:16
  • I have a 2D vector representing my Hash Map (because of separate chaining) and it's filled with pointers to DataEntry objects. I am using `clear()`, `shrink_to_fit()`, and `delete` in my HashMap destructor to handle memory (a simplified sketch of the destructor is below these comments). My original plan was to handle memory by myself, without using `shared_ptr`, `unique_ptr`, `weak_ptr`, etc. –  May 11 '23 at 02:22
  • In that case, you probably want to write your unit tests to explicitly destroy all of your Hash Map objects and ensure that there's no still reachable memory, other than what would be expected from the system library and perhaps the testing infrastructure (a well-designed testing infra will have a way of ensuring there's none.) – Chris Dodd May 11 '23 at 02:28
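
For reference, here is a simplified sketch of the kind of destructor I mean. It is not my actual assignment code; the DataEntry fields and the bucket count are made up, and copy/assignment handling is omitted for brevity:

```cpp
#include <string>
#include <vector>

// Simplified stand-in for the real DataEntry class.
struct DataEntry {
    std::string key;
    std::string value;
};

class HashMap {
public:
    explicit HashMap(std::size_t buckets) : table(buckets) {}

    ~HashMap() {
        // Delete every owned entry before the vectors themselves go away.
        for (auto& bucket : table) {
            for (DataEntry* entry : bucket) {
                delete entry;
            }
            bucket.clear();
        }
        table.clear();
        table.shrink_to_fit();
    }

private:
    // Separate chaining: each bucket is a chain of owned pointers.
    std::vector<std::vector<DataEntry*>> table;
};
```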

1 Answer


It all depends on whether the leaks are bounded in number or not (assuming that the size of the individual leaks isn't huge either).

Usually one-off leaks for things like buffers are fairly harmless.

Leaks that occur every time a function gets called are more serious. I've seen cases where applications fail due to leaks like this: in "normal" use the function only gets called a small number of times and the leak causes no problems, then a customer does something weird, the function gets called millions of times, and the application ends up running out of memory.
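
A contrived sketch (not from any real application) of the difference between the two kinds of leak:

```cpp
#include <cstring>

// One-off allocation: leaked once, bounded, usually harmless.
static char* config_buffer = nullptr;

void init_config() {
    if (!config_buffer) {
        config_buffer = new char[4096];   // never freed, but only allocated once
    }
}

// Per-call leak: grows with the number of calls.
void process_request(const char* data) {
    char* copy = new char[std::strlen(data) + 1];
    std::strcpy(copy, data);
    // ... use copy ...
    // missing: delete[] copy;  -> one lost block per call
}

int main() {
    init_config();
    for (int i = 0; i < 1000000; ++i) {
        process_request("some input");    // a million calls, a million leaked blocks
    }
}
```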

The other problem that I've seen with big applications that do not maintain a zero-leak policy (or at least a zero-suppressed-leaks policy) is that you end up with hundreds if not thousands of leaks. Then, when the day comes that someone makes a change and adds a serious leak, it can easily get lost in the noise.

Paul Floyd