
I know this issue has been addressed in a few questions on Stack Overflow. Experts have pointed out that blocks reported as "still reachable" are not a memory leak; whether to free them is entirely the programmer's choice. The problem in my case is that I am still not able to identify the pointer(s) that must be freed.

Valgrind clearly shows that there is no loss of memory, but when I kept my program running for 3 days, integrated with the rest of the Zabbix code, I noticed that free memory dropped from 2.75 GB to 2.05 GB (my computer has 4 GB of RAM).

==13630== LEAK SUMMARY:
==13630==    definitely lost: 0 bytes in 0 blocks
==13630==    indirectly lost: 0 bytes in 0 blocks
==13630==      possibly lost: 0 bytes in 0 blocks
==13630==    still reachable: 15,072 bytes in 526 blocks
==13630==         suppressed: 0 bytes in 0 blocks

I want you to see the whole code, which is why I am not pasting it here. Please click here to have a look at the code, which will build in Eclipse CDT.

Purpose of the code: I rewrote this code to let the Zabbix server get a system value from the Zabbix agent installed on a remote machine. The linked code creates a file in the directory "/ZB_RQ" on the remote host containing the request "vm.memory.size[available]", and the Zabbix agent writes the corresponding value back into the same file, prefixed with "#" to distinguish the response from the request. For testing, I use localhost "127.0.0.1" as the remote host. The program authenticates as a user "zabbix" with password "bullet123", as you will see in the code itself. The whole operation is carried out with the libssh2 API.
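For quick reference without opening the link, the request-writing step looks roughly like this. It is a simplified sketch with an illustrative name (write_request is not the real function name), and it assumes the SSH session and SFTP channel to 127.0.0.1 have already been set up and authenticated as the user "zabbix":

    /* Sketch only: write one request into a file under /ZB_RQ through an
     * already-authenticated libssh2 SFTP channel. */
    #include <libssh2.h>
    #include <libssh2_sftp.h>
    #include <string.h>

    static int write_request(LIBSSH2_SFTP *sftp, const char *path, const char *item)
    {
        LIBSSH2_SFTP_HANDLE *fh =
            libssh2_sftp_open(sftp, path,
                              LIBSSH2_FXF_WRITE | LIBSSH2_FXF_CREAT | LIBSSH2_FXF_TRUNC,
                              LIBSSH2_SFTP_S_IRUSR | LIBSSH2_SFTP_S_IWUSR |
                              LIBSSH2_SFTP_S_IRGRP | LIBSSH2_SFTP_S_IROTH);
        if (!fh)
            return -1;

        /* e.g. item = "vm.memory.size[available]" */
        ssize_t rc = libssh2_sftp_write(fh, item, strlen(item));
        libssh2_sftp_close(fh);   /* releases the handle's memory */
        return rc < 0 ? -1 : 0;
    }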

Before you get started: please create a directory "/ZB_RQ", create a user "zabbix" with password "bullet123", and install libssh2-devel on your Linux machine.

How to get this program completely working: when you run this program under valgrind, you will find (after the function "sendViaSFTP" has executed) that a file has been created in the directory "/ZB_RQ". The program waits 3 seconds for the value to appear in the same file; if nothing arrives, it creates a new file with the same expectation. So, within those 3 seconds, you have to write a sample response (say "#test") into the file from another terminal. That way you can follow the whole execution.
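The waiting-and-checking part works roughly like this (again a simplified sketch with an illustrative name, not the exact code from the link): the file is re-opened after the 3-second pause, and its content only counts as a response if it starts with "#".

    /* Sketch only: re-read the request file and accept it as a response
     * only if the agent (or you, from another terminal) prefixed it with '#'. */
    #include <libssh2.h>
    #include <libssh2_sftp.h>
    #include <unistd.h>

    static int read_response(LIBSSH2_SFTP *sftp, const char *path,
                             char *buf, size_t buflen)
    {
        sleep(3);   /* give the agent time to answer */

        LIBSSH2_SFTP_HANDLE *fh = libssh2_sftp_open(sftp, path, LIBSSH2_FXF_READ, 0);
        if (!fh)
            return -1;

        ssize_t n = libssh2_sftp_read(fh, buf, buflen - 1);
        libssh2_sftp_close(fh);

        if (n <= 0 || buf[0] != '#')
            return -1;        /* no answer yet: the caller creates a new request */
        buf[n] = '\0';        /* buf now holds e.g. "#test" */
        return 0;
    }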

The moment you kill the execution (Ctrl+C), valgrind shows the result above, together with a very long list of "still reachable" blocks.

I have made sure to free every libssh2 resource, but I still cannot figure out why memory keeps dropping. Is this happening because these "reachable blocks" keep piling up?
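For completeness, this is roughly what the freeing of the libssh2 objects looks like in my code, so you can see what I mean by freeing every libssh2 resource (simplified sketch; the variable names are illustrative and the real code has error checks):

    /* Sketch only: teardown order after the SFTP file handles are closed. */
    #include <libssh2.h>
    #include <libssh2_sftp.h>
    #include <unistd.h>

    static void teardown(LIBSSH2_SESSION *session, LIBSSH2_SFTP *sftp, int sock)
    {
        libssh2_sftp_shutdown(sftp);                     /* free the SFTP channel   */
        libssh2_session_disconnect(session, "shutdown"); /* polite SSH disconnect   */
        libssh2_session_free(session);                   /* free the session state  */
        close(sock);                                     /* release the TCP socket  */
        libssh2_exit();                                  /* balances libssh2_init() */
    }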

Even if these blocks are never going to consume all the memory, please help me get rid of the "reachable blocks" anyway.

Rohit
  • See http://stackoverflow.com/a/3857638/93747 – Daniel Stenberg Sep 09 '14 at 06:08
  • Yes, I have read that before. I mentioned in my question that these "reachable blocks" are not a potential issue, as discussed in other questions. But I still don't know why I am losing memory, and that is why I want to get rid of these reachable blocks in the first place. – Rohit Sep 09 '14 at 06:16
  • "still reachable" blocks are quite simply almost never leaks so you should go on looking for your problems elsewhere – Daniel Stenberg Sep 09 '14 at 06:23
  • OK, considering this is not a potential memory leak: is there still a way to get rid of these blocks? I just want to learn. – Rohit Sep 09 '14 at 07:00
  • I guess "still reachable" is trouble for a long-lived process. The day I started my application, the RSS value of the process was around 3 MB, and it has been increasing ever since; after 2 weeks, the RSS value of the same process had grown to 30 MB. I have run my application under valgrind for two continuous days and found no "definitely lost" or "possibly lost" blocks. I have tried changing my code, but I still have no clue!! :( – Rohit Dec 10 '14 at 10:39

0 Answers