
I am building WebKit (about 2 million lines of code) roughly every ten minutes to see the effect of my changes. Linking WebKit on my machine requires processing 600-700 MB of object files stored on my hard disk, which takes around 1.5 minutes. I want to speed up this linking process.

Is there any way I can tell the OS to keep all the object files in RAM only (I have 4 GB of RAM)? Is there any other way to speed up the linking?

Any ideas or help is appreciated!

Here is the command that takes 1.5 minutes:

http://pastebin.com/GtaggkSc

SunnyShah
  • Can you confirm that with e.g. top or vmstat? The data should already be in your cache, but the linker will also need some time to process it, so you might be CPU bound. – Turbo J Sep 16 '10 at 13:03
  • @Turbo J, I found that only one core of my processor is being used, at 22-25 percent. – SunnyShah Sep 16 '10 at 14:48

4 Answers


I solved this problem by using tmpfs and the gold linker.

1) tmpfs: mount the directory that contains all the object files as tmpfs (see the sketch below).

2) gold linker: using the gold linker makes linking 5-6 times faster; with the tmpfs advantage, the speedup is 7-8 times over normal linking. Run the following command on Ubuntu and your normal linker will be replaced with the gold linker:

sudo apt-get install binutils-gold
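
A minimal sketch of both steps, assuming the object files live under a directory named WebKitBuild (the paths and the tmpfs size are placeholders; adjust them to your build tree and available RAM):

# 1) Mount a tmpfs and copy the object files onto it:
sudo mkdir -p /mnt/webkit-obj
sudo mount -t tmpfs -o size=1g tmpfs /mnt/webkit-obj
cp -a WebKitBuild/. /mnt/webkit-obj/
# 2) After installing binutils-gold, verify that ld is now gold:
ld --version | head -n 1    # should mention "GNU gold"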

You may run into some linking errors with the gold linker; the thread below is a good help on those.

Replacing ld with gold - any experience?

SunnyShah
  • As a replacement for tmpfs, I would recommend vmtouch to simply check that your directory is being cached in RAM, and to force the directory to be cached if that isn't sufficient. It eliminates a possible source of bugs and complexity. – cmc Apr 11 '18 at 19:24

Try using a ramdisk.
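
A minimal sketch of a classic block-device ramdisk, assuming the brd kernel module is available (the size and mount point are placeholders):

# Create a ~1 GiB ramdisk block device (rd_size is in KiB):
sudo modprobe brd rd_size=1048576
# Put a filesystem on it and mount it:
sudo mkfs.ext2 -q /dev/ram0
sudo mkdir -p /mnt/ramdisk
sudo mount /dev/ram0 /mnt/ramdisk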

mmonem
  • Or, on a modern Linux system, a tmpfs is generally better. – MarkR Sep 12 '10 at 16:16
  • I run my Linux in a VMware image which lives on a ramdisk, with a special minimal Linux configuration just for compiling. It's very, very difficult otherwise to be sure that everything is in RAM. But this requires much more than 4 GB; 16 GB is the minimum. – Lothar Sep 15 '10 at 16:33

Truthfully, I'm not sure I understand the problem, but would something like ramfs be of use to you?

Matt Briançon
  • Thanks for the comment; I have clarified my question now. – SunnyShah Sep 12 '10 at 15:22
  • OK, so it seems like using ramfs (or a ramdisk, as mmonem suggested) would be useful to you, provided it lets you create "disks" that are sufficiently large (disclaimer: I've never used either, but I've heard tell of their usefulness). Copy the object files you require to the "disk" and point your linker at these files rather than the ones on your hard disk. Hope this helps. – Matt Briançon Sep 12 '10 at 15:52
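
A minimal sketch of that workflow, assuming ramfs (which, unlike tmpfs, does not enforce a size limit, so keep an eye on the 4 GB); the paths and the link command are illustrative only:

# Mount a ramfs and copy the object files into it:
sudo mkdir -p /mnt/ramobj
sudo mount -t ramfs ramfs /mnt/ramobj
cp -a WebKitBuild/*.o /mnt/ramobj/
# Then point the link step at the copies, e.g.:
g++ -o WebKit /mnt/ramobj/*.o $OTHER_LINK_FLAGS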

Get an SSD for your Linux machine. If write performance is still a problem, configure the output path to be on a ramdisk.

Have you measured how much of the 1.5 minutes is really I/O bound? WebKit being so large means you can run into memory cache thrashing. You should try to find out how many L1/L2 cache misses you have; I suspect this is the problem. In that case, your only hope is that someone on the GCC team looks into it.
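
A minimal sketch of how one might measure that, assuming the link command from the pastebin is wrapped in a script called link-webkit.sh (a hypothetical name):

# Count cache references/misses during the link step:
perf stat -e cache-references,cache-misses ./link-webkit.sh
# In another terminal, watch whether you are I/O or CPU bound:
vmstat 1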

By the way: Microsoft has the same problem with extremely long link times.

Lothar
  • Just looked at your pastebin. You should really try to bundle the single .o files into a .lib; this many loose files can be a problem. And try using ReiserFS, which is much better with small files than other filesystems. – Lothar Sep 16 '10 at 16:55
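
A minimal sketch of that bundling step with ar (the archive and file names are placeholders):

# Bundle the loose object files into one static archive:
ar rcs libwebkitobjs.a WebKitBuild/*.o
# Link against the archive instead of hundreds of individual .o files:
g++ -o WebKit main.o -L. -lwebkitobjs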