
When executing `make -jn`, with n > m, where m is the number of physically available CPUs, what happens to those n - m jobs? Does make dispatch only m jobs to the OS and queue the remaining n - m itself, or are all n jobs dispatched and the OS context-switches between them?

Ruup
1 Answer


The latter: all n jobs are dispatched and the OS context-switches between them.
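
You can watch this happen. The sketch below (assuming GNU make, `pgrep`, and a POSIX shell) generates a throwaway Makefile whose eight recipes each just sleep, runs it with `-j8`, and counts the children: make forks all of them immediately rather than queuing any itself.

```sh
# Throwaway Makefile: "all" depends on j1..j8, and a pattern rule
# makes each jN simply sleep for 10 seconds (\t emits the recipe tab).
printf 'all: j1 j2 j3 j4 j5 j6 j7 j8\nj%%:\n\tsleep 10\n' > Makefile

make -j8 &      # all eight recipes are started at once
sleep 1         # give make a moment to fork its children
pgrep -c sleep  # prints 8 (assuming no other sleeps are running)
wait
```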

Linux normally schedules processes in a fair fashion, so compiling a large project with `make -j` (no job count limit) is likely to swamp your machine with processes and bring it to a halt, due to swapping once all those processes no longer fit into RAM.
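
If you still want an unlimited job count, GNU make's `-l` (`--max-load`) option is one way to hedge: the one-liner below (assuming GNU make and coreutils' `nproc`) lets make spawn jobs freely but stops starting new ones while the load average is at or above the number of logical CPUs.

```sh
# Unlimited -j, but no new jobs while the load average >= CPU count.
make -j -l"$(nproc)"
```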

Solaris 10 did not suffer from that.

When compiling on Linux with build systems that scale linearly with the number of jobs (e.g. non-recursive make), a rule of thumb is not to use more jobs than there are logical CPUs (each hyper-thread of a core counts as a logical CPU). A bit of empirical data.
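
Applying that rule of thumb on Linux (assuming GNU coreutils for `nproc`):

```sh
nproc                      # number of logical CPUs, hyper-threads included
getconf _NPROCESSORS_ONLN  # portable alternative
make -j"$(nproc)"          # one job per logical CPU
```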

Maxim Egorushkin
+1! Did that with a large project (OpenCV), crashed the machine... Lesson learned: use `-j4` or `-j8`. – kebs Mar 04 '16 at 18:34