
In Make, this flag exists:

-l [load], --load-average[=load]
    Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, it removes a previous load limit.

Do you have a good strategy for what value to use for the load limit? It seems to differ a lot between my machines.
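For concreteness, here is a minimal sketch of the flag in use (GNU make on Linux; the throwaway makefile and the choice of the core count as the limit are illustrative assumptions, not a recommendation):

```shell
# Create a throwaway makefile so the example is self-contained.
printf 'all:\n\t@echo built\n' > /tmp/loadavg-demo.mk

CORES=$(nproc)   # online CPU cores (GNU coreutils)

# Unlimited job slots (-j with no number), but make starts no new job
# while the load average is at or above $CORES:
make -f /tmp/loadavg-demo.mk -j --load-average="$CORES" all
```

This prints `built` once the single demo target has run.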

oHo
Zitrax

2 Answers


Acceptable load depends on the number of CPU cores. If there is one core, then a load average of more than 1 means overload. If there are four cores, then a load average of more than four means overload.

People often just specify the number of cores with the -j switch.

See some empirical numbers here: https://stackoverflow.com/a/17749621/412080
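The rule of thumb above can be written down directly (a sketch: nproc is from GNU coreutils; on macOS/BSD the equivalent would be sysctl -n hw.ncpu):

```shell
# One job slot per core, as suggested above.
JOBS=$(nproc)          # number of online CPU cores
echo "make -j$JOBS"    # the command you would run, e.g. make -j8
```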

Maxim Egorushkin
    Experimentally though, while timing different uses of -j, I noticed on one machine that I got the fastest compiles using -j8 on a 4-core machine. – Zitrax May 28 '11 at 14:25
    I have a similar experience. Probably because there are some I/O-bound stages of compilation, so when one compiler process blocks on I/O, another one can use that time for compiling. – Maxim Egorushkin May 31 '11 at 11:15

I recommend against using the -l option.

In principle, -l seems superior to -j. -j says: start this many jobs. -l says: make sure this many jobs are running. Often those are almost the same thing, but when you have I/O-bound jobs or other oddities, -l should be better.

That said, the concept of load average is a bit dubious. It is necessarily a sampling of what goes on on the system. So if you run make -j -l N (for some N) and you have a well-written makefile, then make will immediately start a large number of jobs and run out of file descriptors or memory before even the first sample of the system load can be taken. Also, the accounting of the load average differs across operating systems, and some obscure ones don't have it at all.
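To see the metric in question: on Linux the kernel publishes the 1-, 5- and 15-minute load averages in /proc/loadavg (a Linux-specific sketch; uptime(1) shows the same figures, and make itself reads the load via getloadavg(3)):

```shell
# The first three fields are the 1-, 5- and 15-minute load averages —
# smoothed, periodically sampled values, which is exactly the caveat above.
cut -d' ' -f1-3 /proc/loadavg
```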

In practice, you'll be just as well off using -j, and you'll have fewer headaches. To get more performance out of the build, tune your makefiles, play with compiler options, and use ccache or similar.
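As a sketch of that advice in practice (the demo makefile is illustrative, and ccache is only wrapped in when it is actually installed):

```shell
# Demo makefile that just reports which compiler driver it would use.
printf 'all:\n\t@echo CC=$(CC)\n' > /tmp/ccache-demo.mk

# Wrap the compiler with ccache when available.
if command -v ccache >/dev/null 2>&1; then
    export CC="ccache cc"
fi

# Plain -j with a job count, no -l, as recommended above.
make -f /tmp/ccache-demo.mk -j"$(nproc)" all
```

This prints `CC=cc` (make's default) or `CC=ccache cc` depending on whether ccache is present.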

(I suspect the original reason for the -l option stems from a time when multiple processors were rare and I/O was really slow.)

Peter Eisentraut
    Have you actually tried this lately? As of GNU make 3.81 (released in 2006), GNU make implements an algorithm to adjust its idea of the system load average based on the number of jobs invoked by make within the last second. This isn't ideal of course, since it's just a guess, but it should keep hundreds of jobs from being invoked right away when make starts. – MadScientist Mar 11 '15 at 18:45
    I don't know how much smarter it has gotten internally, but the behavior that it is going to start a lot of jobs right away is still easily reproducible. – Peter Eisentraut Mar 13 '15 at 13:47
    Unfortunately, in my experience you also need -l. The issue is that you're often going to have multiple complex make systems that don't necessarily know about each other, so the mechanism that shares job counts isn't always useful. (Try things like builds that include openssl, dbus, curl, and a dozen other libraries for Android and iOS; you're going to be kicking off builds that aren't really part of your own build system.) – James Moore Jan 08 '18 at 17:45