I am trying to load a core dump of a program in gdb (the dump is quite big, about 30 GB), but I believe it is not being loaded fully. After a lot of research I could not find anything relevant, so I am asking here. Initially you get warnings about missing files. I am using gdb 8 and gdb 11; both give the same results.
Warnings:
Reading symbols from svc...
warning: Can't open file /app/svc during file-backed mapping note processing
warning: Can't open file /SYSVafaf1240 (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123f (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123e (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123d (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123c (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123b (deleted) during file-backed mapping note processing
warning: Can't open file /SYSVafaf123a (deleted) during file-backed mapping note processing
warning: Can't open file /opt/lib64/libpq.so.5 during file-backed mapping note processing
warning: Can't open file /opt/lib64/libresolv.so.2 during file-backed mapping note processing
warning: Can't open file /opt/lib64/librabbitmq.so.4 during file-backed mapping note processing
warning: Can't open file /opt/lib64/libboost_random.so.1.67.0 during file-backed mapping note processing
warning: core file may not match specified executable file.
[New LWP 1]
[New LWP 26]
[New LWP 9]
[New LWP 65]
[New LWP 10]
--Type <RET> for more, q to quit, c to continue without paging--
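The "Can't open file ... during file-backed mapping note processing" warnings usually mean gdb cannot find the executable and shared libraries at the paths recorded inside the core (common when analysing the dump on a different machine or outside the container it was produced in). A sketch of pointing gdb at a local copy of the target's filesystem before loading the core; the paths here are illustrative, not taken from the question:

```shell
# Assumes /srv/target-root holds a copy of the crashed machine's
# root filesystem (including /app/svc and /opt/lib64/*.so).
gdb -iex "set sysroot /srv/target-root" \
    -iex "set solib-search-path /srv/target-root/opt/lib64" \
    ./svc core
```

`set sysroot` and `set solib-search-path` must take effect before the core is read, hence `-iex`. This only fixes missing-library symbols, though; it does not help if the core file itself is truncated.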
And then the backtrace looks like this; you can see it is not complete.
Backtrace sample:
Core was generated by `./service'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00000000004a11b0 in nlohmann::basic_json<std::map, std::vector, std::string, bool, long, unsigned long, double, std::allocator, nlohmann::adl_serializer>::create<std::string, char const (&) [48]> (args#0=...) at src/commons/nlohmann/json.hpp:11937
11937 std::unique_ptr<T, decltype(deleter)> object(AllocatorTraits::allocate(alloc, 1), deleter);
[Current thread is 1 (LWP 1)]
Thread 49 (LWP 64):
#0 0x00007f53de7ccccd in ?? ()
#1 0x0000000000000000 in ?? ()
Thread 48 (LWP 45):
#0 0x00007f53e0218a35 in ?? ()
#1 0x0000000000000000 in ?? ()
Thread 47 (LWP 44):
#0 0x00007f53e0218a35 in ?? ()
#1 0x0000000000000000 in ?? ()
Thread 46 (LWP 42):
#0 0x00007f53e0218a35 in ?? ()
#1 0x0000000000000000 in ?? ()
Thread 45 (LWP 39):
#0 0x00007f53e0218a35 in ?? ()
#1 0x0000000000000000 in ?? ()
Thread 44 (LWP 37):
#0 0x00007f53e0218a35 in ?? ()
#1 0x0000000000000000 in ?? ()
Multiple threads were running when the program segfaulted, but from the backtrace gdb gives it is impossible to tell what they were doing. Can you please help with this?
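One way to check whether the core file itself was truncated on disk (which would explain the `??` frames) is to compare the file's actual size against the end of the last segment described by its ELF program headers; `readelf -l core` shows the same information. A minimal sketch of that check, assuming a little-endian ELF64 core (the common x86-64 case); the function names are my own, not from any library:

```python
import os
import struct


def expected_core_size(path):
    """Return the minimum file size implied by the ELF program headers.

    Only handles little-endian ELF64 and does not cover the PN_XNUM
    case of more than 65534 program headers.
    """
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        if ident[4] != 2 or ident[5] != 1:
            raise ValueError("only little-endian ELF64 handled here")
        f.seek(0x20)                      # e_phoff: program header table offset
        (e_phoff,) = struct.unpack("<Q", f.read(8))
        f.seek(0x36)                      # e_phentsize, e_phnum
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        end = 0
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            # p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz
            p_type, p_flags, p_offset, _va, _pa, p_filesz = struct.unpack(
                "<IIQQQQ", f.read(40))
            end = max(end, p_offset + p_filesz)
        return end


def is_truncated(path):
    """True if the file is shorter than its program headers claim."""
    return os.path.getsize(path) < expected_core_size(path)
```

If `is_truncated("core")` reports true, the dump was cut off when it was written, and no gdb option can recover the missing thread stacks.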
Update: the coredump.conf file on the machine where the core dump was created looks like this:
[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
Is this an issue?
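The commented lines in coredump.conf show that installation's defaults, so with `ProcessSizeMax=2G` in effect, systemd-coredump would indeed truncate a ~30 GB dump to 2 GB. A sketch of raising the limits; the 40G values are illustrative, chosen only to exceed the 30 GB dump:

```ini
# /etc/systemd/coredump.conf -- raise limits so a ~30 GB dump is kept whole
[Coredump]
Storage=external
Compress=yes
ProcessSizeMax=40G
ExternalSizeMax=40G
```

The changed limits apply to dumps written after the edit; the already-truncated core cannot be repaired, so the crash would need to be reproduced.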