
I have been using libcurl to handle GET/POST requests, and valgrind always shows me the same number of reachable blocks (21). What am I missing in the standard cURL configuration to avoid that?

This is the simplest code that reproduces the "error":

#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

int main(int argc, char const *argv[]) {

  CURL *curl;

  curl_global_init(CURL_GLOBAL_ALL);
  curl = curl_easy_init();
  /* ... */
  curl_easy_cleanup(curl);
  curl_global_cleanup();

  return 0;

}

I compile with

$ gcc -o test cURL.c -lcurl

Valgrind check

$ valgrind --leak-check=full --track-origins=yes ./test

==4295== Memcheck, a memory error detector
==4295== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==4295== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==4295== Command: ./test
==4295== 
==4295== 
==4295== HEAP SUMMARY:
==4295==     in use at exit: 858 bytes in 21 blocks
==4295==   total heap usage: 4,265 allocs, 4,244 frees, 185,353 bytes allocated
==4295== 
==4295== LEAK SUMMARY:
==4295==    definitely lost: 0 bytes in 0 blocks
==4295==    indirectly lost: 0 bytes in 0 blocks
==4295==      possibly lost: 0 bytes in 0 blocks
==4295==    still reachable: 858 bytes in 21 blocks
==4295==         suppressed: 0 bytes in 0 blocks
==4295== Reachable blocks (those to which a pointer was found) are not shown.
==4295== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==4295== 
==4295== For counts of detected and suppressed errors, rerun with: -v
==4295== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
OskrD
  • Might those be a bug? Re-run with `--leak-check=full --show-leak-kinds=all` and possibly fix them/report them to upstream – mgarciaisaia Jul 24 '18 at 16:51
  • It is not unusual that libraries allocate some pseudo-static buffers with malloc when a function is called first. These are often not freed at exit, but they are technically not memory-leaks. curl might do that itself or indirect through calls to other libraries (i.e. libcrypto/libssl etc). Since they do not really leak memory, you can usually safely ignore that – Ctx Jul 24 '18 at 17:44
  • Thanks, in fact with more detailed flags, valgrind traces the blocks to those libraries. I was just wondering if I could free those blocks, although it actually isn't a memory leak, just as you said @Ctx. – OskrD Jul 24 '18 at 22:13

1 Answer


libcurl links against many libraries, and some of them do not have a function like curl_global_cleanup which reverts initialization and frees all memory. This happens when libcurl is linked against NSS for TLS support, and also with libssh2 and its use of libgcrypt. GnuTLS as the TLS implementation is somewhat cleaner in this regard.

In general, this is not a problem because these secondary libraries are only used on operating systems where memory is freed on process termination, so an explicit cleanup is not needed (and would even slow down process termination). Only with certain memory debuggers is the effect of the missing cleanup routines visible, and valgrind deals with this situation by differentiating between actual leaks (memory to which no pointers are left) and memory which is still reachable at process termination (so that it could have been used again if the process had not terminated).
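If these still-reachable blocks clutter the output of other valgrind runs, they can be silenced with a suppression file instead of being "fixed". A minimal sketch (the suppression name is arbitrary, and the `obj:` pattern is an assumption that depends on which TLS backend your libcurl was built against):

```
{
   curl_backend_init_reachable
   Memcheck:Leak
   match-leak-kinds: reachable
   ...
   obj:*libcrypto.so*
}
```

Save it as e.g. curl.supp and pass it with `valgrind --suppressions=curl.supp --leak-check=full ./test`.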

Florian Weimer
  • Ok, so in this case, isn't anything missing? Is it something inherent to libcurl? Before asking, I read that this is not a real [error](https://stackoverflow.com/a/3857638/10090729), but I just want to be sure I'm not missing anything there. – OskrD Jul 24 '18 at 22:33
  • libcurl is special because it provides that curl_global_cleanup function. Most libraries do not. The OS will free memory on process exit anyway, so many libraries do not bother. – Florian Weimer Jul 24 '18 at 23:04