
In a number of situations as a programmer, I've found that my compile times are slower than I would like, and I want to understand why and fix them. Language-specific tricks (yes, I'm using C/C++) have already been discussed, and we apply many of them. I've also seen this question and realize it's related. What I'm more interested in is what tools people use to diagnose hardware/system bottlenecks in the build process. Is there a standard way to prove "disk reads/writes are too slow for our builds - we need SSDs!" or "the anti-virus settings are killing our build times!", etc.?

Resources I've found, none directly related to diagnosing compile performance:

  • A TechNet article about using PerfMon (quite good, and close to what I'd like)
  • This IBM link detailing some PerfMon information, but it's not specific to compiling and appears somewhat out of date
  • A webpage specifically describing diagnosis of average disk queue length

Currently, diagnosing a slow build is very much an art. What do others do to diagnose system-level build performance bottlenecks? Can we come up with a list of PerfMon or Process Explorer statistics to watch, with thresholds for what's "acceptable" on a modern machine? My tools of choice so far:

PerfMon:

  • CPU -> % Processor Time
  • MEMORY -> Pages/sec
  • DISK -> Avg. Disk Queue Length

Process Explorer:

  • CPU -> CPU
  • DISK -> I/O Delta Total
  • MEMORY -> Page Faults
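
For capturing those PerfMon counters unattended while a build runs, here's a minimal sketch using typeperf (which ships with Windows); the one-second interval and the output file name are just placeholders:

    rem Log the three counters above once per second to a CSV file;
    rem press Ctrl+C when the build finishes.
    typeperf -si 1 -o build_counters.csv ^
        "\Processor(_Total)\% Processor Time" ^
        "\Memory\Pages/sec" ^
        "\PhysicalDisk(_Total)\Avg. Disk Queue Length"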
– Joe Schneider
  • This still strikes me as a question that would probably get more (and more useful) answers on serverfault.com. System administrators tend to know quite a lot about measuring loads and tuning machines to fit those loads. Your load being a compiler shouldn't matter all that much. – Jerry Coffin Dec 18 '09 at 17:06
  • @Jerry you may be right about sysadmins having that kind of knowledge, but it's still a programming question not a sysadmin question. Later, when someone else is searching for help on a similar topic, will they look on SO or SF? – Jay Bazuzi Feb 27 '10 at 10:45

2 Answers


I resolved a "too slow build" issue with Eclipse and Spring recently. For me the first step was the Vista Resource Monitor, which showed CPU spiking (though never consistently high) and quite a bit of disk activity. I then used Procmon from Sysinternals to identify exactly which files were being heavily accessed.

Part of our build process also involves checking external Maven (binary file) repositories for updates on every build. I disabled that check (which also gives me precise control over when I update those dependencies). If you have resources external to the build machine, benchmark how long it takes to access them (source control, Maven repositories, etc.).
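
As a rough way to quantify that cost, you can time the dependency check in isolation, e.g. with PowerShell's Measure-Command (mvn's -o flag forces offline mode; the goal here is just a before/after comparison, and the project setup is assumed):

    # Time dependency resolution with and without the network check.
    Measure-Command { mvn dependency:resolve }      # hits remote repositories
    Measure-Command { mvn -o dependency:resolve }   # offline: local cache only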

Since I'm stuck on 32-bit Vista for now, I decided to create a Ramdisk in the roughly 700 MB of otherwise non-addressable memory (the PC has 4 GB, but 32-bit Vista only exposes about 3.3 GB) and to move the heavily accessed files identified by Procmon onto it, using a nice trick of creating drive junctions to make that move transparent to my IDE. For details see here.
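
The junction trick itself is just the built-in mklink command (Vista and later). A minimal sketch, assuming the Ramdisk is mounted as R: and the hot directory is the project's build output (both paths are hypothetical):

    rem Move the hot directory to the Ramdisk, then leave a junction
    rem behind so the IDE still finds it at the old path.
    move C:\workspace\project\build R:\build
    mklink /J C:\workspace\project\build R:\build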

– Eric J.

I have used FileMon to see which header files a C++ build was opening most often, and then:

  • Added "#ifndef" include guards so each header is only processed once (see the sketch below)
  • Switched to precompiled headers
  • Combined some small header files
  • Reduced the number of header files included by other header files by tidying up the code
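
For the first point, the standard include-guard idiom looks like this (a minimal sketch; widget.h and the guard name are made up):

    // widget.h -- the guard makes repeated #include of this file a
    // no-op after the first expansion, saving the preprocessor work.
    #ifndef WIDGET_H
    #define WIDGET_H

    class Widget {
    public:
        void Frobnicate();
    };

    #endif // WIDGET_H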

However, these days I would start with a RamDisk and/or an SSD; even so, opening lots of header files still uses a lot of CPU time.

– Ian Ringrose