9

I am using Visual Studio, and it seems that getting rid of unused references and using statements speeds up my build time on larger projects. Are there other known ways of speeding up build time? What about other languages and build environments?

What is typically the bottleneck during build/compile? Disk, CPU, Memory?

What are some good references for distributed builds?

esac
  • Actually this is not language-agnostic. Different languages have different requirements for a compiler, which affects compile time significantly. – Brian Rasmussen Oct 08 '09 at 17:45
  • True, but my question is more about how to improve performance in different compilers, environments, languages, etc. I generally use Visual Studio, but sometimes msbuild, perl, etc. – esac Oct 08 '09 at 18:21

12 Answers

6

The biggest improvement we made for our large C++ project was from distributing our builds. A couple of years ago, a full build would take about half an hour, while it's now about three minutes, of which one third is link time.

We're using a proprietary build system, but IncrediBuild is working fine for a lot of people (we couldn't get it to work reliably).

Gautam
Sebastiaan M
3

Fixing your compiler warnings should help quite a bit.

Robert Greiner
  • Only if he uses the "cl" command line tool. Writing stdout into a file is not slow - even on Windows. – Lothar Oct 08 '09 at 18:14
  • I'm just curious whether it's simply writing them to a logfile that causes the slowdown, so that if you have few warnings, or there is no bottleneck, it will not really matter? – esac Oct 08 '09 at 18:24
  • Any link to back this up? Our program generates 5000+ compiler warnings and I have been looking for a good excuse to remedy that for quite some time. – Chris Shouts Oct 08 '09 at 19:08
  • Yeah, I am pretty curious about that one; has any benchmark been done on this point? – Drahakar Jan 31 '12 at 05:01
  • @ChrisShouts How does that even happen? Not sure if you still work there, but do you know if this has been fixed? – Lysol Jan 21 '16 at 19:00
  • @AidanMueller I don't work there anymore, but to the best of my knowledge it has not been fixed. It scarred me for life though; I turn on "Treat warnings as errors" in every new project I create. – Chris Shouts Jan 21 '16 at 22:11
3

Buy a faster computer

ParmesanCodice
  • This isn't always the case. I have a project at work that I just upgraded from 2.83GHz last-gen to 3.2GHz processors, both quad core. I doubled the amount of memory from 8GB to 16GB. I switched from RAID0 7200RPM to RAID0 15K SAS, and I still do not see an improvement in build time. There seem to be other factors to take into consideration. – esac Oct 08 '09 at 18:30
  • This will only help a bit. Distribution (see my answer) will give you many more cycles. We went from 3GHz to around 900GHz when going distributed. :) Regards, Sebastiaan – Sebastiaan M Oct 09 '09 at 11:19
  • I guess upgrading from 1 core to 8 cores would improve the performance a lot - maybe disk I/O would be the bottleneck in that case. – Baiyan Huang Jan 12 '10 at 10:31
3

At my previous job we had big problems with compilation time, and one of the strategies we used was called the Envelope pattern; see here.

Basically it attempts to minimize the amount of code the pre-processor copies in from headers by keeping header size down. It does this by moving anything that isn't public into a private friend class; here's an example.

foo.h:

class FooPrivate;
class Foo
{
public:
   Foo();
   virtual ~Foo();
   void bar();
private:
   friend class FooPrivate;
   FooPrivate *foo;
};

foo.cpp:

#include "foo.h"

// The private "letter" class is defined entirely in the .cpp, so changes to
// it never force clients of foo.h to recompile.
class FooPrivate
{
    int privData;
    char *morePrivData;
};

Foo::Foo()
{
   foo = new FooPrivate();
}

Foo::~Foo()
{
   delete foo;
}

The more include files you do this with, the more the savings add up. It really does help your compilation time.

It does make things difficult to debug in VC6 though as I learned the hard way. There's a reason it's a previous job.

ReaperUnreal
  • If you aren't happy with what this solution did to your maintenance cycle, then why suggest it at all? – Will Bickford Oct 08 '09 at 18:51
  • As a warning. I'm suggesting it so that people know it exists, and avoid it at all costs. It does work, it does reduce compile time, but like I said, debugging is nearly impossible on VC6. – ReaperUnreal Oct 14 '09 at 18:27
2

Please read this book. It's pretty good on the topic of physically structuring your project into different files to minimize rebuilds.

Unfortunately it was written before templates became that important. Templates are the real time killer when it comes to C++ compilation, especially if you make the mistake of using smart pointers everywhere. In that case you can only constantly upgrade to the latest CPU and recent SSD drives. MSVC is already the fastest existing C++ compiler if you use precompiled headers.

http://ecx.images-amazon.com/images/I/51HNJ7KBBAL._BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA240_SH20_OU01_.jpg
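
As a rough, hedged sketch of the precompiled-header setup mentioned above (the file names stdafx.h and widget.cpp are just illustrative conventions, not from the original answer), the usual MSVC arrangement looks something like this:

stdafx.h:

// Hypothetical precompiled header: collect the heavy, rarely changing
// includes here so the compiler parses them once and reuses the result.
#pragma once
#include <vector>
#include <string>
#include <map>
#include <memory>   // smart pointers, the template-heavy cost mentioned above

widget.cpp:

// Hypothetical source file: with "Use Precompiled Header" (/Yu) enabled in
// the project settings, the include below is served from the already-parsed
// precompiled state instead of being re-read and re-parsed from disk.
#include "stdafx.h"

void do_widget_work()
{
    // only the code below the precompiled header include is parsed on rebuild
}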

Glorfindel
Lothar
2

If you're using a lot of files and a lot of templated code (STL / BOOST / etc.), then Bulk or Unity builds should cut down on build and link times.

The idea of Bulk Builds is to break your project down into subsections and include all the CPP files in that subsection in a single file. Unity builds take this further by having a single CPP file, which is the only one compiled, that includes all other CPP files (a rough sketch follows the list of reasons below).

The reason this is often faster is:

1) Templates are only evaluated once per Bulk File

2) Include files are opened / processed only once per Bulk File (assuming there is a proper #ifndef FILE__FILENAME__H / #define FILE__FILENAME__H / #endif wrapper in the include file). Reducing total I/O is a good thing for compile times.

3) The linker has much less data to work with (Single Unity OBJ file or several Bulk OBJ files) and is less likely to page to virtual memory.
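
As a hedged sketch of what one such bulk file might look like (the file names are made up purely for illustration; real projects typically generate these files from the build system rather than maintaining them by hand):

bulk_gameplay.cpp:

// One bulk file per subsection: every .cpp listed here becomes part of a
// single translation unit, so shared headers are opened once and templates
// are instantiated once for the whole group instead of once per file.
#include "player.cpp"
#include "enemy.cpp"
#include "projectile.cpp"
// ... remaining .cpp files of this subsection

// The .cpp files listed above are then excluded from the normal build, and
// only the bulk files are handed to the compiler and linker.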

EDIT: Adding a couple of links here on Stack Overflow about Unity Builds.

Adisak
2

Be wary of broad-sweeping "consider this directory and all subdirectories for header inclusion" type settings in your project. They force the compiler to iterate over every directory until it finds the requested header file, which can be a very expensive operation for however many headers you include in your project.
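
As a small hedged illustration (the directory layout here is hypothetical), one way to limit this cost is to keep a single include root and spell the rest of the path out in the directive, rather than letting the compiler probe a long list of recursive search directories:

// With "this directory and all subdirectories" search settings, the compiler
// may probe many directories before it finds this header.
#include "widget.h"

// With a single include root and an explicit relative path, the lookup is
// direct.
#include "ui/widgets/widget.h"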

fbrereto
1

Visual Studio supports parallel builds, which can help, but the true bottleneck is Disk IO.

In C, for instance, if you generate LST files your compile will take ages.

sylvanaar
  • A listing file. It's an output of the assembly instructions for the given source code. Many times it also includes the source code text inline with the assembly so you can see which instructions are associated with which lines of source code. – sylvanaar Oct 26 '09 at 09:13
0

Don't compile with debug turned on.

Taylor Leese
  • This is a bad suggestion. Debug builds are often necessary. In fact, most of your builds will probably end up being debug builds because that's the mode you're most likely using as you develop. – Joseph Garvin Oct 25 '09 at 17:43
  • I didn't say it isn't useful to compile with debug turned on. The question asked how to improve compile time only. – Taylor Leese Oct 25 '09 at 23:41
0

For C++ the major bottleneck is disk I/O. Many headers include other headers back and forth, which causes a lot of files to be opened and read through for each compilation unit.

You can reach a significant improvement if you move the sources onto a RAM disk. Even more if you ensure that your source files are read through exactly once.

So for new projects I began to include everything in a single file I call _.cpp. Its structure is like this:

/* Standard headers */
#include <vector>
#include <cstdio>
//...

/* My global macros*/
#define MY_ARRAY_SIZE(X) (sizeof(X)/sizeof(X[0]))

// My headers
#include "foo.h"
#include "bar.h"
//...

// My modules
#include "foo.cpp"
#include "bar.cpp"

And I only compile this single file.

My headers and source files do not include anything, and they use namespaces to avoid clashes with other modules.

Whenever my program is missing something, I add its header and source to this module only.

This way each source file and header is read exactly once, and the project builds very quickly. Compile times increase only linearly as you add more files, not quadratically. My hobby project is about 40,000 LOC and 500 modules but still compiles in about 10-20 seconds. If I move all sources and headers onto a RAM disk, compile time drops to 3 seconds.

The disadvantage is that existing codebases are quite difficult to refactor to use this scheme.

Calmarius
0

For C# - using fixed versions for your assemblies instead of auto-incrementing ones greatly speeds up subsequent local builds.

assemblyinfo.cs

// Auto-incrementing version: every build produces a new version number,
// so all assemblies referencing this one must also be rebuilt.
[assembly: AssemblyVersion("1.0.*")]

// Fixed version: already-built assemblies do not need to be recompiled
// to pick up version number changes.
[assembly: AssemblyVersion("1.0.0.0")]
Vedran
0

Compilation time and the brittle base class problem: I have written a blog post on a way to improve compilation time in C++. Link.