
I know the usual approach to developing C or C++ applications is to make each module a separate compilation unit, with the definitions in the source files and the declarations in headers. With good link-time optimization (LTO) support, this strategy seems fine, but in my experience the LTO currently available in gcc or g++ isn't that great.

So my idea (not my own invention, of course; I know some projects already do the same thing) is to define everything in headers, make all free functions static, use include guards, and in the final stage have only a single compilation unit, so that after preprocessing the compiler actually deals with one big source file. I will have to avoid circular references to achieve this, but I have never found circular references necessary; there was always some way to make the dependencies hierarchical.

So given that my project is not so big that the increased compilation time becomes unbearable, is my idea 'usable'? What other problems might I be missing? My project, by the way, is very performance sensitive; every additional second saved matters. I am asking this question because I know my approach is not a usual one and I couldn't find enough resources, information, or people's opinions via a Google search. Any help appreciated.

  • Reading compilation errors from one huge compilation unit (even worse if created dynamically) will be a big pain. – tumdum Jan 12 '15 at 08:01
  • If it works for you, go for it. I wouldn't use it since I value maintainability. – paxdiablo Jan 12 '15 at 08:05
  • If you put all code in header files and only have a single translation unit, then if you make a small and tiny change in one header file *all* the code has to be recompiled, instead of just the affected source file. – Some programmer dude Jan 12 '15 at 08:06
  • If your project really is performance sensitive, you should probably spend your time writing the best performing code you can. Stuff like LTO may just about give you a tiny little improvement on top of everything else, but you should only worry about that when your code is already *really* efficient in my opinion. – Henrik Jan 12 '15 at 08:10
  • Related to [the-benefits-disadvantages-of-unity-builds](http://stackoverflow.com/questions/847974/the-benefits-disadvantages-of-unity-builds) – Jarod42 Jan 12 '15 at 08:12
  • Any project having some success will grow, probably quite a bit. Using a build strategy based on just one translation unit seems to set up limitations from the start. The only scenario where it makes sense is when expecting to fail. I'd not start if I expect to fail... – Dietmar Kühl Jan 12 '15 at 08:26
  • There are a lot of methods for speeding up builds - what you propose (a single compilation unit, aka "unity build") is one of them. Others are, e.g., removing unnecessary headers - there is some tool support for this, for example include-what-you-use and deheader - and on Linux also switching from make to ninja/distcc and from ld to gold; I have also heard that warp is a faster C++ preprocessor. Of course, reducing dependencies helps. And using faster machines. If you make small changes, look at ccache. The Internet is full of advice - but don't trust it. Measure it. – Tomasz Lewowski Jan 12 '15 at 20:48

0 Answers