3

I've been playing around with some large Visual Studio C++ projects, and it seems that more time is spent building the precompiled header than the individual source files.

I've since made some changes to the project itself (enabled the /MP flag and set the maximum number of parallel project builds in "Tools ==> Options"), and the builds seem to be about 10% faster. That's nowhere near the improvement I see with the Linux versions of the same projects, which build 4-5x faster when the -j option is passed to make.
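
For reference, this is roughly the command-line equivalent of what I've enabled (the solution name and job count below are just placeholders):

    rem Build up to 8 projects of the solution in parallel (msbuild's /m switch).
    rem The /MP compiler switch itself is set per project under
    rem C/C++ -> General -> Multi-processor Compilation, not on this command line.
    msbuild MySolution.sln /m:8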

First, are there any other options that need to be set to take advantage of multi-core systems for improving build speed, particularly with generating the precompiled header?

Second, it seems that by enabling multi-processor support, I can no longer do "incremental builds". If I understand this correctly, each "Build" would be the same as a full "Rebuild" or "Clean, Build" operation. Is this so? Last I checked, GNU makefile projects don't suffer from this limitation if the makefile is written properly, so it seems odd that such a modern and expensive tool as Visual Studio would have this issue.

Thank you.

Cloud
    From my personal experience, it's usually the linker that takes forever. So I usually disable IPO, whole program optimization, and link time code generation. This moves all the "expensive" steps into the highly parallelizable main compilation. Only for production builds do I turn all that stuff back on. – Mysticial Aug 15 '15 at 21:47

2 Answers

4

I have been investigating this problem over the last week. It turns out that the underlying "msbuild" tool with parallel building enabled ("/m:N") can build projects in parallel, but the individual tasks within a project are always serial. Given the dependencies between projects, this often means that the number of parallelisation opportunities is quite limited.

I use CMake to generate the solution and project files, which means it's possible to compare the same build using generators for different build systems. I've been using the Ninja build tool, which can make much better use of parallelisation. Monitoring all CPU cores with the resource monitor shows that MSBuild uses 1 core, sometimes 2; Ninja pegs all 8 cores at their limit for the vast majority of the build on my workstation. For my code, this translates to 125 mins with msbuild vs 45 mins with Ninja on a 24-core build node. The reason it's not a 24x speedup is that the unit tests take up most of that time and are not parallelised, but I did see the build itself peg all 24 cores, unlike msbuild.
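
To give a rough idea, switching generators is just a matter of re-running CMake; the source path below is a placeholder:

    rem Generate Ninja build files instead of a Visual Studio solution,
    rem then let Ninja schedule the compile jobs itself (it picks a parallel
    rem job count based on the number of cores; -j N overrides it).
    cmake -G Ninja C:\path\to\source
    ninja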

So the general take home message is that Visual Studio/msbuild support for effective parallelisation is quite limited, and if you want to make your builds as fast as possible, you will have to look elsewhere for better tools. I found CMake/Ninja to be an effective alternative.

Roger Leigh
  • Do the cmake/ninja tools work on windows, and well? May I please have links to them? – Cloud Aug 15 '15 at 23:27
  • For what it's worth, just recently I've been hearing about Visual Studio 2015 having the ability to build files in parallel within projects. – TheUndeadFish Aug 16 '15 at 06:49
  • For CMake, see http://www.cmake.org/ For Ninja, see https://martine.github.io/ninja/ and for downloads https://github.com/martine/ninja/releases – Roger Leigh Aug 16 '15 at 07:26
  • VS has had the ability to build things in parallel with projects for quite a while. But there are a number of options that cause it to be disabled by default. So you need to play with the settings to get it to actually work. From my experience, this involves turning on `/MP`, disabling whole program optimization, and disabling link-time code generation. – Mysticial Aug 20 '15 at 17:44
  • @Mysticial that seems to only affect what Roger said in his answer -- the number of parallel projects, not files within a single project. – xaxxon May 01 '16 at 06:14

2

Visual Studio has two types of parallel builds: within a project, and between projects.

Within a project, it will run multiple copies of the C++ compiler (once the precompiled header is built) when the "Multi-Processor Compilation" option (/MP) is enabled. The linker runs single-threaded, but I haven't tried Link Time Code Generation to know what that does. You won't see much benefit from this if your project has only a few files in it.

The other parallel building it does is multiple projects concurrently, and is set using Tools => Options. This will build multiple C++, C#, and other types of projects simultaneously, as long as project dependencies allow it (i.e., two projects can be built at the same time if neither depends on the other).

I've successfully used one or the other, depending on the content and size of my solution. Enabling both can be beneficial, but you might also oversubscribe the available CPU threads on your system. If your solution consists of all or mostly C++ projects, pick one or the other: having 8 projects building, each trying to use 8 processes, on an 8-core system can be a bit taxing.
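
If it helps to see where the per-project setting lives, this is roughly what turning on /MP adds to a .vcxproj (a sketch of a standard MSBuild C++ project; the project-level concurrency remains the separate Tools => Options setting):

    <!-- Within a project: spawn multiple cl.exe processes for this project's files (/MP). -->
    <ItemDefinitionGroup>
      <ClCompile>
        <MultiProcessorCompilation>true</MultiProcessorCompilation>
      </ClCompile>
    </ItemDefinitionGroup>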

To reduce the time it takes to build the precompiled header, you can define the WIN32_LEAN_AND_MEAN macro before including the Windows headers to exclude rarely used parts of the API. There are also some NOxxx macros recognized by windows.h that exclude other parts of the API. (See What does #defining WIN32_LEAN_AND_MEAN exclude exactly?.)
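
For example, a trimmed precompiled header might start out something like this (NOMINMAX and NOGDI are optional and only appropriate if your code doesn't need those pieces):

    // In the precompiled header (e.g. stdafx.h), before <windows.h> is pulled in:
    #define WIN32_LEAN_AND_MEAN  // skip rarely used APIs (DDE, RPC, shell, Winsock 1, ...)
    #define NOMINMAX             // stop the min/max macros clashing with std::min/std::max
    #define NOGDI                // omit GDI declarations if the project does no drawing
    #include <windows.h>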

To answer your second question, the /Gm (Enable Minimal Rebuild) option checks whether class definitions changed within a modified header to decide if a source file needs to be recompiled. This is in addition to, and different from, the normal timestamp checking performed by VS (and make). A Rebuild or Clean/Build will delete all the compiler-generated files and rebuild everything, whether the files have changed or not. While /MP does not work with /Gm (the compiler ignores /MP if both are given), enabling /MP does not prevent the timestamp-based dependency checking that make also relies on, so a normal "Build" is still incremental.

1201ProgramAlarm