
My question is about the possibility that the vector clear() function does not free the allocated memory when called. I know there must be a leak of some sort in my program connected to these vectors, because a prior implementation in which the vectors didn't exist showed no growth in memory usage over time (it barely reached 0.2% of memory usage), while the current version reached ~80% of memory usage after 2,000,000 iterations of the main loop. I did some research and found out that memory is not necessarily deallocated after clear() is called, and that a possible trick to circumvent that is to use a swap with an empty temporary to do the work. For instance:

void clearvecint(vector<int>& vec){
    vec.clear();              // empties the vector but keeps its capacity
    vector<int>().swap(vec);  // swap with an empty temporary to release the storage
}

However, that didn't do the trick. I also tried making the vectors global variables and applied the same technique; the leaks were still present. Finally, I resorted to Valgrind (compiling with the -O0 flag), with the "--leak-check=full --show-leak-kinds=all" options, and got the following final result:

==28738== LEAK SUMMARY:
==28738==    definitely lost: 0 bytes in 0 blocks
==28738==    indirectly lost: 0 bytes in 0 blocks
==28738==      possibly lost: 0 bytes in 0 blocks
==28738==    still reachable: 167,928 bytes in 590 blocks
==28738==         suppressed: 0 bytes in 0 blocks
==28738== 
==28738== For counts of detected and suppressed errors, rerun with: -v
==28738== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

I also did some research as to what that might mean and came across this answer: Still Reachable Leak detected by Valgrind. I am not completely sure, but as I understand it, "still reachable" blocks are sort of OK to leave alone: they are still referenced at exit, and the OS will clean them up when the process ends (even though relying on that is bad practice). However, that doesn't really apply to my case. I MUST claim that memory back, since I need more iterations (10,000,000+) to get the result I really want, and the current growth will most likely end in a crash.

I will gladly post more code if required in order to solve the issue.

  • There is no memory leak in the code you posted. What you are seeing is the compiler's heap management system being smart. Run your program as you intend to -- don't be surprised if it works with no problems. – PaulMcKenzie Jan 28 '16 at 23:35
  • All right! I'll do it. It might take a while though (half a day). I'll post a comment to say whether it worked or not later. – Caio Jan 28 '16 at 23:40
  • Additional options to valgrind will report a backtrace of allocations of the suspected leaks. It's highly likely that the "still reachable" allocations have absolutely nothing to do with your code. – Sam Varshavchik Jan 28 '16 at 23:44
  • Also, you might want to think about re-using the vector (with the memory). Allocating and releasing memory is quite time consuming. – Rumburak Jan 29 '16 at 07:04
  • I found out what I was doing wrong. No, it wasn't Valgrind or the heap management system. It was me: a mistake in my code. I was creating new elements in a vector endlessly without realizing it. The worst part is that the memory was actually still reachable at the end, since I never cleared that one vector (it was one of the final output vectors). That's why Valgrind was reporting the leaks as reachable. In the end, the vector only needed about 1,000 elements, not the 1 billion it was getting. I will certainly consider Rumburak's suggestion. Not sure if it will be possible, but I'll try. – Caio Jan 29 '16 at 16:04

0 Answers