
I keep crashing gcc (due to lack of memory) when trying to compile a class that uses many function templates defined in the corresponding .h file. I have 6.4GB of memory available when I start compiling that .cpp file:

$ free -mh
              total        used        free      shared  buff/cache   available
Mem:           9.7G        3.1G        6.5G        260K        212M        6.4G
Swap:          947M        887M         59M

Here are the details of my gcc installation:

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 7.3.0-16ubuntu3' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --with-as=/usr/bin/x86_64-linux-gnu-as --with-ld=/usr/bin/x86_64-linux-gnu-ld --program-suffix=-7 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

Here's the memory situation right before the crash while compiling that file:

$ free -mh
              total        used        free      shared  buff/cache   available
Mem:           9.7G        9.6G        113M        352K         58M        816K
Swap:          947M        947M          0B

And the crash details:

c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
main.dir/build.make:127: recipe for target 'myclass.cpp.o' failed

Is there anything I can do to avoid this (besides removing the templates)? I personally think 6.4GB should be enough memory to compile a .cpp file, no matter how complicated the file is. Am I wrong in thinking that?

EDIT 1. Here are the function prototypes in the .hpp:

typedef std::shared_ptr<Object> ObjectPtr;

template<typename MapTypeT, typename KeyTypeT, typename ValueTypeT>
ObjectPtr InitMap(KeyTypeT key, ValueTypeT value, std::list<std::pair<ObjectPtr, ObjectPtr>> keyValuePairs);

template<typename KeyTypeT, typename ValueTypeT>
ObjectPtr ConstructMap(KeyTypeT key, ValueTypeT value, std::list<std::pair<ObjectPtr, ObjectPtr>> keyValuePairs); // calls InitMap<std::unordered_map> and InitMap<std::map>

template<typename KeyTypeT>
ObjectPtr DeduceValue(KeyTypeT key, const ObjectPtr &anyValue, const std::list<std::pair<ObjectPtr, ObjectPtr>> &keyValuePairs); // calls ConstructMap<KeyTypeT, ValueTypeT> 

ObjectPtr CreateMap(std::list<std::pair<ObjectPtr, ObjectPtr>> keyValuePairs); // calls DeduceValue<KeyTypeT>

EDIT 2.

Here's proof that gcc is actually using 6GB for compilation:

Tasks: 295 total,   4 running, 223 sleeping,   0 stopped,   0 zombie
%Cpu(s): 57.0 us,  2.3 sy,  0.0 ni, 40.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 10208840 total,   138780 free,  9850648 used,   219412 buff/cache
KiB Swap:   969960 total,       28 free,   969932 used.   104132 avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND    
 91917 mfonnem+  20   0 6381472 6.029g  14936 R  99.3 61.9   2:50.04 cc1plus    
 90603 mfonnem+  20   0 5300888 1.243g  16796 S   8.6 12.8   5:42.14 java       

EDIT 3. I increased the swap space to 8GB and now gcc uses just under 8GB while trying to compile that file:

top - 00:13:27 up  5:25,  1 user,  load average: 2.55, 2.02, 1.92
Tasks: 298 total,   3 running, 226 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.9 us, 16.5 sy,  0.0 ni, 13.4 id, 42.9 wa,  0.0 hi, 23.4 si,  0.0 st
KiB Mem : 10208832 total,   111252 free, 10015484 used,    82096 buff/cache
KiB Swap:  8388604 total,  3069856 free,  5318748 used.     7256 avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND    
 25578   20   0 9071708 7.498g    412 R  27.2 77.0   5:27.26 cc1plus  

Eventually, it reports:

internal compiler error: Segmentation fault
     }
     ^
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.

EDIT 4. Compiling the same file with clang++ succeeds.

markf78

2 Answers


I personally think 6.4GB should be enough memory to compile a .cpp file regardless of how complicated the file. Am I wrong here in this thinking?

Yes. C++ templates happen to be "accidentally" Turing-complete (see, for example, Todd L. Veldhuizen's paper C++ Templates are Turing Complete).

So you are wrong. Pathological C++ programs can take an arbitrary (or even infinite) amount of time and memory to be compiled (and coding such pathological C++ programs is quite easy).

In other words, C++ programmers should use templates with great care and parsimony. It is quite easy to write an explosive template (one with combinatorial explosion at template expansion time). When you write templates, you need to convince yourself (and ideally, prove) that their expansion takes linear, or at least bounded and reasonable, time and space; that is, estimate the time and space complexity of template expansion.
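
To make that concrete, here is a deliberately pathological sketch (not the asker's code): every Blowup<N, M> instantiation spawns two distinct child instantiations, so the compiler has to materialize on the order of 2^N types.

// Deliberately pathological illustration (not the asker's code):
// each Blowup<N, M> instantiates two *distinct* specializations,
// Blowup<N-1, 2M> and Blowup<N-1, 2M+1>, so roughly 2^(N+1) types
// get created. Compile time and compiler memory grow exponentially in N.
template<int N, int M>
struct Blowup {
    static constexpr long value =
        Blowup<N - 1, 2 * M>::value + Blowup<N - 1, 2 * M + 1>::value;
};

template<int M>
struct Blowup<0, M> {               // base case: one leaf per path
    static constexpr long value = 1;
};

// N = 16 already means roughly 130,000 instantiations and a noticeably
// slow compile; a handful of extra levels can eat gigabytes of RAM.
static_assert(Blowup<16, 0>::value == (1L << 16), "2^16 leaves");

The templates in the question are not recursive, but the same kind of blow-up can appear whenever each instantiation triggers several further distinct instantiations.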

BTW, the ~10 gigabytes of RAM you have is not a lot today. Consider buying more RAM, increasing your swap space, closing every non-essential application (IDEs, word processors, web browsers, JVMs, ...), disabling debugging information, and decreasing the optimization level when compiling that file (and compile it on the command line). Upgrading to GCC 8 might also help. Try Clang as well.

And you could have bugs in your templates (e.g. improper or "infinite" recursion, or excessive expansion).

You may want to use GCC C++ options like -ftemplate-depth (to catch excessive template instantiation depth).
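
As an illustration (a hypothetical runaway template, not taken from the question), setting a low depth limit turns an unbounded recursion into a quick, clear diagnostic:

// Hypothetical runaway recursion: there is no terminating specialization,
// so instantiating Infinite<0> never stops on its own.
template<int N>
struct Infinite {
    static constexpr int value = Infinite<N + 1>::value;
};

// Compiling with e.g.  g++ -std=c++14 -ftemplate-depth=64 -c runaway.cpp
// fails fast with "template instantiation depth exceeds maximum of 64"
// instead of churning through GCC's default limit of 900 levels.
int trigger = Infinite<0>::value;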

Basile Starynkevitch
  • Disabling debug does a lot more for decreasing memory usage than removing optimizations. – xaxxon Sep 04 '18 at 17:56
  • @Basile Starynkevitch This was not meant as an attack on gcc. I realize compilation is difficult as I've written non-production level compilers before. But, 6GB for a single object file just seems like a lot of memory to me. Btw, the templates are not recursively calling themselves. – markf78 Sep 04 '18 at 18:07
  • 1
    Please don’t comment on your belief of what is going on. Instead post an [mcve] – xaxxon Sep 04 '18 at 18:09
  • @xaxxon It's not a belief; I posted evidence that gcc is using 6GB of memory at the time of the crash. Please see the output from top. – markf78 Sep 04 '18 at 18:16
  • You did not yet post any [MCVE]. We need it – Basile Starynkevitch Sep 04 '18 at 18:18
  • 2
    @markf78 the true beauty of the MCVE is you'll probably get it half done, see the problem, give yourself a slap, and then fix the problem yourself. A good MCVE can be brutal and time-consuming, but damn are they effective. – user4581301 Sep 04 '18 at 18:26
  • @xaxxon You can delete the question if you like; I don't think I have the privileges to do it myself. BTW, I am now able to compile in clang but not gcc. Unable to reproduce it in an MCVE though. – markf78 Sep 05 '18 at 16:20

I have normally found that adding more swap space generally fixes this kind of issue with GCC; see here for how to add swap space. The price you pay, though, is compile/link times that can take hours. As for why, I suspect the "mangling rules" for templates are the killer factor, although that's hard to prove. Dumping the symbols (using the -Q flag) might help you understand better.

Ideas for fixing the build

  • You might find that moving symbols into an anonymous namespace speeds things up, since it removes them from the exported symbols (see the sketch after this list)
  • You also might find a UnityBuild approach helps, by doing more work while the symbols are created.
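
A minimal sketch of the anonymous-namespace idea, using hypothetical names rather than the asker's code: helpers that are only needed inside one .cpp file can be given internal linkage, so they never show up as exported symbols.

// myclass.cpp -- hypothetical sketch, not the asker's actual code.
#include <cstddef>
#include <functional>
#include <string>

namespace {  // everything in here has internal linkage

// Only used inside this translation unit, so it is never exported.
template<typename KeyTypeT>
std::size_t HashKey(const KeyTypeT &key)
{
    return std::hash<KeyTypeT>{}(key);
}

} // anonymous namespace

// The single entry point the rest of the program links against.
std::size_t HashName(const std::string &name)
{
    return HashKey(name);
}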

Sorry for not being more help, but it might give you some ideas. If you find anything interesting with -Q, please post the results.

Guest