6

I'm working on a project that uses a lot of templates and libraries like boost::fusion, and I found myself struggling with an executable that, as of today, is 600 MB; gdb needs 1.7 GB of memory just to load it, and a symbol lookup makes it use nearly 3 GB. The stripped binary is about 5 MB.

The question is: is there anything to be done in order to generate smaller debug symbols? This is not only a problem with gdb but also with the linker, which uses another 1.2 GB of RAM when linking objects compiled with the -g flag.

I've tried -g1, -g2 and -g3, and the problem remains the same.
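
For reference, the levels only change how much DWARF information gcc emits; these are roughly the invocations I tried (the file name is just a placeholder):

    g++ -g1 -c heavy_templates.cpp   # minimal: functions, externals, line tables, no locals
    g++ -g2 -c heavy_templates.cpp   # the default level implied by plain -g
    g++ -g3 -c heavy_templates.cpp   # everything from -g2 plus macro definitions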

Gustavo
  • Tell us something more about your program. What does it do, and what is it supposed to be doing? – Mantosh Kumar Mar 24 '14 at 17:20
  • gcc lacks something equivalent to comdat folding last I checked. And symbol name compression would probably also help. – Yakk - Adam Nevraumont Mar 24 '14 at 17:23
  • I think your problem is the 600MB executable (seriously, wtf?), not gcc/gdb's (in)ability to compress debug symbols. – Karoly Horvath Mar 24 '14 at 17:28
  • Most modern IDEs and debuggers deal with template code just fine. – πάντα ῥεῖ Mar 24 '14 at 17:30
  • Maybe you are just using an old gcc version. I've been in a similar situation at a previous company: we "upgraded" to gcc 4.5, and even with debug flags the application was 500 MB, while with optimizations it was close to 2 MB. The application also used tons of metaprogramming. GCC got better at this over time, so check your version. – Klaim Mar 24 '14 at 17:42
  • No, this happens with current gcc versions, from gcc-4.5 up to 4.8. The program has huge symbols because of template metaprogramming, nested templates and the use of boost::fusion. – Gustavo Mar 24 '14 at 17:47
  • 1
    Well, I routinely debug core-dumps that weigh between 1.5GB and 2GB, and require loading ~100 libraries of a couple MB each, and gdb deals with those without issue (though initial load time might be a bit long). So, is the issue about gdb, or about an under-sized machine ? – Matthieu M. Mar 24 '14 at 18:05
  • Matthieu, yours is a different case; I'm not talking about loading a core dump. And it's not a problem with an undersized machine, but with the scalability of the project, since the RAM usage of gdb and ld increases faster than hardware upgrades allow. – Gustavo Mar 24 '14 at 18:18

2 Answers

2

is there anything to be done in order to generate smaller debug symbols?

You can use the GNU gold linker with the --compress-debug-sections=zlib option instead of the default ld linker to compress the debug sections in the output. Gdb has supported compressed debug sections since version 7.0.
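
A minimal sketch of how that might look through the gcc driver (assuming GCC 4.8+, which accepts -fuse-ld=gold; with older compilers you can make ld point at ld.gold instead):

    # compile with debug info as usual
    g++ -g -c main.cpp

    # link with gold and ask it to zlib-compress the .debug_* sections
    g++ -fuse-ld=gold -Wl,--compress-debug-sections=zlib main.o -o app

    # check whether the debug sections came out compressed
    # (depending on the binutils version they show up as .zdebug_* or with the C flag)
    readelf -S app | grep -i debug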

ks1322
  • 1
    Nice, I didn't know about the gold linker, but I got "collect2: error: ld terminated with signal 11 [Segmentation fault], core dumped" when I tried using it (version 2.23 from the Ubuntu 13.10 repositories). I will try compiling the latest version. – Gustavo Mar 25 '14 at 00:02
  • Nope, I compiled the latest version (2.24) from source and it still core dumps when using the --compress-debug-sections=zlib flag. – Gustavo Mar 25 '14 at 00:34
  • @Gustavo I'm also having gold segfault when trying --compress-debug-sections=zlib. Were you ever able to get this to work? Do you know if there is a bug report for gold? – acm Jul 12 '14 at 20:21
  • @acm no, sorry, I didn't try it again. – Gustavo Jul 14 '14 at 01:26
1

As a partial solution, you can compile only a few source files with the -g option. Or use the strip utility on the .o files that do not need to be debugged yet.

If the issue is just the executable file size, you can use this approach to make it smaller without losing debug info.
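
A minimal sketch of that approach (file names are hypothetical):

    # keep full debug info only for the code you actually need to step through
    g++ -g -c parser.cpp -o parser.o
    # build the rest without -g
    g++ -c util.cpp -o util.o

    # or, if an object was already built with -g, drop its debug sections afterwards
    strip --strip-debug util.o

    g++ parser.o util.o -o app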

qehgt