When I run my app with Valgrind on Linux, I get similar stack traces, like this one:
==20124== 524,288 bytes in 1 blocks are still reachable in loss record 1,065 of 1,065
==20124== at 0x4C33D2F: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20124== by 0x77C999: bson_realloc (bson-memory.c:154)
==20124== by 0x77CA04: bson_realloc_ctx (bson-memory.c:194)
==20124== by 0x75E162: _mongoc_buffer_fill (mongoc-buffer.c:254)
==20124== by 0x73324D: mongoc_stream_buffered_readv (mongoc-stream-buffered.c:240)
==20124== by 0x733D09: mongoc_stream_readv (mongoc-stream.c:237)
==20124== by 0x733E45: mongoc_stream_read (mongoc-stream.c:281)
==20124== by 0x75DF38: _mongoc_buffer_append_from_stream (mongoc-buffer.c:200)
==20124== by 0x6FB71C: mongoc_cluster_run_opmsg (mongoc-cluster.c:3468)
==20124== by 0x6F5A7B: mongoc_cluster_run_command_monitored (mongoc-cluster.c:544)
==20124== by 0x70A27A: _mongoc_cursor_run_command (mongoc-cursor.c:1052)
==20124== by 0x70BAA1: _mongoc_cursor_response_refresh (mongoc-cursor.c:1673)
==20124== by 0x70D09C: _prime (mongoc-cursor-find-cmd.c:36)
==20124== by 0x70CEA4: _prime (mongoc-cursor-find.c:61)
==20124== by 0x70A73E: _call_transition (mongoc-cursor.c:1204)
==20124== by 0x70A962: mongoc_cursor_next (mongoc-cursor.c:1280)
==20124== by 0x6528F0: mongocxx::v_noabi::cursor::iterator::operator++() (cursor.cpp:45)
==20124== by 0x652C02: mongocxx::v_noabi::cursor::iterator::iterator(mongocxx::v_noabi::cursor*) (cursor.cpp:80)
==20124== by 0x652B2D: mongocxx::v_noabi::cursor::begin() (cursor.cpp:67)
==20124== ...
Valgrind command:
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --track-origins=yes --num-callers=200 ./
The problem is that this allocation is still present after the request has been served, and according to the report the memory consumption does not go back down to its initial state. The piece of code it comes from:
mongocxx::cursor cursor = collection.find(document{} << "m" << matrixId << finalize);
for (auto&& m : cursor) {
    // process the returned document
}
Some additional info:
- I use connection pooling; the acquire() method is used to get a client entry from the pool (see the sketch after this list)
- The initial implementation served every request on a new thread, so a thread create/exit pair bracketed each request; once the thread exited after serving its request, the issue went away. The problem appeared when a fixed-size thread pool was deployed and requests are served by these long-lived threads that never exit.
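
Roughly, the request handling on a worker thread looks like this (a minimal sketch: the connection string, database/collection names, and the type of matrixId are simplified placeholders, not my real code):

#include <bsoncxx/builder/stream/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/pool.hpp>
#include <mongocxx/uri.hpp>

using bsoncxx::builder::stream::document;
using bsoncxx::builder::stream::finalize;

// Runs on a long-lived worker thread from the fixed-size thread pool.
void handle_request(mongocxx::pool& pool, int matrixId) {
    // Get a client entry from the pool for the duration of this request.
    auto client = pool.acquire();
    auto collection = (*client)["mydb"]["mycoll"];  // placeholder names

    mongocxx::cursor cursor =
        collection.find(document{} << "m" << matrixId << finalize);
    for (auto&& m : cursor) {
        // process the returned document
        (void)m;
    }
    // The client entry goes back to the pool here, but the worker
    // thread itself keeps running and serves the next request.
}

int main() {
    mongocxx::instance instance{};  // must outlive every client
    mongocxx::pool pool{mongocxx::uri{"mongodb://localhost:27017"}};
    handle_request(pool, 42);
}
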
Can anybody help me figure out what is going wrong?