35

I have a Core i5 with 8 GB of RAM. I have VMware Workstation 10.0.1 installed on my machine, with Fedora 20 Desktop Edition installed in VMware as the guest OS.

I am working on the Linux kernel source code, v3.14.1, developing an I/O scheduler for the Linux kernel. After every modification to the code, it takes around 1 hour and 30 minutes to compile and install the whole kernel before I can see the changes.

Compilation and installation commands: `make menuconfig`, `make`, `make modules`, `make modules_install`, `make install`

So my question is: is it possible to reduce this 1 hour and 30 minutes to only 10 to 15 minutes?

Mateusz Piotrowski
  • 8,029
  • 10
  • 53
  • 79
momersaleem
  • 451
  • 1
  • 4
  • 7
  • 2
    there's a little-known `make gconfig` that's unbelievably more convenient than `make menuconfig` – Oleg Mikheev Dec 05 '16 at 01:05
  • 1
    I'd suggest getting more RAM, as that will make things much faster. If you have a super-fast SSD (e.g. Intel Optane), the difference is not that big. I would suggest a minimum of 16 GB for a kernel developer, especially if you use virtual machines. It was not clear to me whether you compile the code on the host or in the virtual machine, but whichever machine does the compiling needs lots of RAM so all files stay cached in RAM. If you have enough resources, you should learn about `distcc` and `ccache`, too. – Mikko Rantalainen Aug 26 '19 at 10:37
  • 1
    Also worth trying: `make localmodconfig` instead of `make menuconfig` to minimize the number of modules and features you're building. – Mikko Rantalainen Aug 26 '19 at 10:39
  • 1
    Here's an article that gives build times around 90 seconds instead of 90 minutes: http://nickdesaulniers.github.io/blog/2018/06/02/speeding-up-linux-kernel-builds-with-ccache/ - that article does not make modules, though. – Mikko Rantalainen Aug 26 '19 at 10:41

10 Answers

29

Do not run make menuconfig for every change you make to the sources, because it will trigger a full recompilation of everything, no matter how trivial your change is. It is only needed when the kernel configuration changes, and that should seldom happen during your development.

Just do:

make

or if you prefer the parallel compilation:

make -j4

or whatever number of concurrent tasks you fancy.

Then make install, etc., may be needed to deploy the freshly built binaries, of course.

Another trick is to configure the kernel with the minimum needed for your tests. I've found that for many tasks a UML (User-Mode Linux) build is the fastest. You may also find `make localmodconfig` useful instead of `make menuconfig` as a starting point; see the sketch below.
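A sketch of the resulting edit-build-deploy cycle, assuming the tree is already configured and you are happy for localmodconfig to drop modules that are not currently loaded:

make localmodconfig          # once, to shrink the .config to what is actually in use
make -j"$(nproc)"            # after each change; only modified files are rebuilt
sudo make modules_install    # only needed if your change lives in a module
sudo make install            # deploy the new kernel image and update the bootloader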

rodrigo
  • 94,151
  • 12
  • 143
  • 190
  • 4
    the `-j` argument should be 1.5x the number of cores – noɥʇʎԀʎzɐɹƆ Jul 26 '17 at 20:31
  • The best results are often achieved using the number of CPU cores in the machine + 1; for example, with a 2-core processor run make -j3 – debug Mar 21 '19 at 07:01
  • I generally use `make -j$(( $(nproc) * 2 ))` and pass CFLAGS such as mtune=native, march=native, O3, fno-plt, and pipe to make the kernel even faster; in my experience it produces better binaries. On my Intel i3 Haswell 3.5 GHz desktop processor, compilation takes ~1 hour with GCC 10.2.0. For the Xanmod and Liquorix kernels, though, the compile time is no less than 2 and a half hours... – 15 Volts Sep 21 '20 at 11:51
  • @noɥʇʎԀʎzɐɹƆ I don't know how you arrived at that multiplier but it has been working great for me. – Hritik May 19 '21 at 17:31
11
  1. Use make's parallel build with the -j option
  2. Compile for the target architecture only; note that ARCH defaults to the host architecture, so setting it explicitly mainly matters when cross-compiling.

For example, instead of running:

make

run:

make ARCH=<your architecture> -jN

where N is the number of cores on your machine (`cat /proc/cpuinfo` lists them). For example, for an i386 target on a host machine with 4 cores:

make ARCH=i386 -j4

Similarly, you can run the other make targets (modules, modules_install, install) with the -jN flag.

Note: make checks which files have been modified and recompiles only those, so only the initial build should take a long time; subsequent builds will be faster.
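If you'd rather not count the cores by hand, the count can be taken straight from the machine (nproc from GNU coreutils, or /proc/cpuinfo as mentioned above), for example:

nproc                                           # prints the number of logical CPUs
make -j"$(grep -c ^processor /proc/cpuinfo)"    # same count derived from /proc/cpuinfo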

brokenfoot
  • 11,083
  • 10
  • 59
  • 80
  • Sorry, I am a bit new to kernel things, so could you please elaborate more on "make ARCH= -jN"? And yes, you are right that make only compiles the modified code, which takes around 10 minutes, but the "make modules_install" command takes about 45 minutes for the .ko files. Any comments please? – momersaleem Apr 24 '14 at 20:43
  • Say if the target machine is `i386` and your machine has `4 cores`, then you can run your make as : `make ARCH=i386 -j4`. – brokenfoot Apr 24 '14 at 20:46
  • 3
    `make ARCH=$(arch) -j$(nproc)` to be general – deadLock Oct 25 '20 at 07:42
  • ARCH= is of no use in this case. See https://docs.kernel.org/kbuild/makefiles.html#kbuild-variables. It is only useful when you are doing a cross compile. – Yutsing May 02 '23 at 01:37
6

make -j (with no number) does not limit the number of parallel jobs, so it will make use of all available CPUs.
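If an unlimited -j ends up swamping the machine (see the comments below), GNU make can also throttle on load average instead of a fixed job count; a sketch:

make -j -l "$(nproc)"    # unlimited jobs, but new ones start only while the load average stays below the CPU count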

Eric
  • 22,183
  • 20
  • 145
  • 196
  • 5
    it spawns hundreds of compiler processes and the system freezes – Oleg Mikheev Dec 12 '16 at 19:57
  • 2
    @OlegMikheev I haven't seen that happen before; maybe you can specify a job count limit if it does, e.g. `make -j 4` as suggested in other answers. – Eric Dec 13 '16 at 05:36
  • 1
    Regarding the original answer: to make use of all available cpu cores: `make -j$(nproc)` – garritfra May 29 '20 at 11:35
  • 1
    @garritfra The man page says `If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.` so guess that means make use of all cpus. – Eric May 29 '20 at 15:52
  • 2
    You're right, it doesn't. But at the same time it means that it's not limited to the number of cores, hence it continuously spawns processes, which freezes up your system. – garritfra May 30 '20 at 16:14
  • @garritfra I guess make will take care of the number of jobs to run, and the OS itself has a fair and smart task scheduler, so even without a -j limit I think it's safe. – Eric May 31 '20 at 04:28
4

You do not need to run make menuconfig again every time you make a change — it is only needed once to create the kernel .config file. (Or possibly again if you edit Kconfig files to add or modify configuration options, but this certainly shouldn't be happening often.)

So long as your .config is left alone, running make should only recompile files that you changed. There are a few files that must be compiled every time, but the vast majority are not.
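For example, after editing a single scheduler source file, an incremental build recompiles just that object and relinks (the path below is only illustrative):

touch block/noop-iosched.c    # stand-in for editing one source file
make -j"$(nproc)"             # rebuilds only that object, then relinks the kernel (or the .ko if it is a module)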

  • Thanks for your comments. As you said, run "make menuconfig" only once; that's fine. The "make" command will obviously be run every time after any change in the code. But is it also necessary to run "make modules", "make modules_install" and "make install" every time after any change in the code to see my changes? – momersaleem Apr 26 '14 at 09:28
  • 1
    `make modules` builds the modules, `make modules_install` installs the modules into their default directory, which is `/lib/modules/{your-kernel-version}`, and `make install` "installs" the new kernel, i.e. it copies the compressed kernel image into the default location, typically /boot/, along with the System.map and .config files. So coming back to your question: if you made changes to a module, it sure is needed. Since you're apparently working on the scheduler, you're most probably working with the core kernel, so there's no need to do a `make modules`, since it won't do anything – AjB Mar 15 '15 at 16:28
3

ccache should be able to dramatically speed up your compile times. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. Your first compilation with ccache will be slower since it needs to populate the cache, but subsequent builds should be much faster.

If you don't want to fuss with ccache configuration, you can just run it like so to compile the kernel:

ccache make
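If the plain prefix form does not seem to engage the cache (see the comment below), the more common invocation is to substitute the compiler explicitly:

make CC="ccache gcc" -j"$(nproc)"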
Elias
  • 1,367
  • 11
  • 25
  • Interesting, I didn't know it was possible to use `ccache` like this instead of the more common `make CC='ccache gcc'`: https://stackoverflow.com/questions/9757436/how-to-use-ccache-with-make/52335011#52335011 I tried it out on a minimal example that just prints `echo which $(CC)` and it pointed to `/usr/bin/cc`, so it seems it was not used. Can you provide a minimal example that shows this works, or some documentation supporting it? – Ciro Santilli OurBigBook.com Sep 30 '18 at 09:21
1

Perhaps in addition to the previous suggestions, while using ccache you might want to unset CONFIG_GCC_PLUGINS (if it was set); otherwise you may get a lot of cache misses, as seen in this example.
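One way to turn the option off, assuming you use the scripts/config helper that ships with the kernel source:

scripts/config --disable GCC_PLUGINS    # clears CONFIG_GCC_PLUGINS in .config
make olddefconfig                       # re-resolves any options that depended on it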

0

Perhaps in addition to the previous suggestions, using ccache (https://ccache.samba.org/) and a compilation directory on an SSD should drastically decrease the compilation time.
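To point the cache at the SSD and verify it is actually being hit on later builds (CCACHE_DIR is ccache's standard environment variable; the path is only a placeholder):

export CCACHE_DIR=/mnt/ssd/ccache    # placeholder path on the SSD
ccache -s                            # shows hit/miss statistics after a build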

a.grochmal
  • 11
  • 3
0

If you have sufficient RAM and you won't be using your machine while the kernel is being built, you can spawn a large number of concurrent jobs. But make sure your RAM really is sufficient, otherwise your system will hang or crash.
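A quick way to check what you have to work with before picking a job count (free and nproc are standard tools on Fedora):

free -h    # how much memory is actually available
nproc      # how many CPUs you can feed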

sohom154
  • 1
  • 1
  • 2
0

Use this command:

sudo make -j 4 && sudo make modules_install -j 4 && sudo make install -j 4

Where 4 is the number of cores I have allotted to this process.

Credits

yudhiesh
  • 6,383
  • 3
  • 16
  • 49
-2

Simple trick: if you aren't using your machine for anything else (or have another one to work on), you can log out completely and switch to a TTY terminal using Ctrl + Alt + F*. Everything is much, much faster.
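On a systemd-based distribution such as Fedora, one way to drop the whole graphical session rather than just switching TTYs (a sketch; isolating graphical.target brings it back):

sudo systemctl isolate multi-user.target    # stop the graphical session and free its memory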

Max_Payne
  • 142
  • 1
  • 1
  • 12
  • Unless your login environment has some unreasonably demanding application running in the background, or you have an unreasonably small amount of memory then this is not going to make anything "much much faster". – Bracken Nov 08 '21 at 17:28
  • @Bracken gnome-shell is a "demanding application". That's why it works, check it first... – Max_Payne Nov 08 '21 at 18:50
  • Gnome shell is not demanding, it uses a tiny fraction of my system's CPU time and memory. That's why this makes no appreciable difference to compile time. If your shell is using significant resources then either you have a problem, or your machine is weaker than a raspberry pi. – Bracken Nov 09 '21 at 10:22
  • @Bracken Leave GNOME Shell aside; it was an example, and it actually is a demanding app on a normal desktop. I was talking about logging out in general, which unloads a lot of apps and system services and definitely speeds up kernel compilation. Good luck. – Max_Payne Nov 09 '21 at 11:46