From the documentation for GNU make: http://www.gnu.org/software/make/manual/make.html#Parallel
When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the ‘-l’ option to tell make to limit the number of jobs to run at once, based on the load average. The ‘-l’ or ‘--max-load’ option is followed by a floating-point number. For example,
-l 2.5
will not let make start more than one job if the load average is above 2.5. The ‘-l’ option with no following number removes the load limit, if one was given with a previous ‘-l’ option.
More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with ‘-l’, make waits until the load average goes below that limit, or until all the other jobs finish.
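In practice ‘-l’ is combined with ‘-j’; a minimal invocation (the job count 8 and the load ceiling 4.0 are arbitrary values for illustration) looks like

make -j8 -l 4.0

which permits up to 8 parallel jobs but defers starting new ones whenever the load average is at or above 4.0.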
From the Linux man page for uptime: http://www.unix.com/man-page/Linux/1/uptime/
System load averages is the average number of processes that are either in a runnable or uninterruptable state. A process in a runnable state is either using the CPU or waiting to use the CPU. A process in uninterruptable state is waiting for some I/O access, eg waiting for disk. The averages are taken over the three time intervals. Load averages are not normalized for the number of CPUs in a system, so a load average of 1 means a single CPU system is loaded all the time while on a 4 CPU system it means it was idle 75% of the time.
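For reference, on Linux those three averages can also be read directly from /proc/loadavg (the output values below are illustrative):

$ cat /proc/loadavg
2.87 3.01 2.64 3/312 4123

where the first three fields are the 1-, 5-, and 15-minute load averages.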
I have a parallel makefile and I want to do the obvious thing: have make keep adding jobs until I get full CPU usage without inducing thrashing.
Many (all?) machines today are multicore, which means the load average by itself is not the number make should be checking: to say anything about CPU saturation it has to be adjusted for the number of cores.
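The obvious workaround would be to scale the limit to the core count at invocation time; a sketch, assuming the nproc utility from GNU coreutils is available to report the number of cores:

make -j"$(nproc)" -l "$(nproc)"

so that, say, a 4-core machine throttles at a load average of 4 rather than 1.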
Does this mean that the --max-load (aka -l) flag to GNU make is now useless? What are people doing who are running parallel makefiles on multicore machines?