C programs can be compiled quickly, but only at the expense of the runtime performance of the generated executable.
Coding a C compiler which compiles quickly into slow-running x86 executables is an easy exercise: Fabrice Bellard did it with TinyCC (fewer than 20k lines of C code). But in practice, you expect your C compiler (such as GCC) to compile quite cleverly. This draft report of mine on Bismon gives examples of such clever optimizations, and that is why GCC is a ten-million-line monster.
The key concept is compiler optimization (loop unrolling, inline expansion, automatic vectorization, register allocation). It is an art, and as a general problem it is undecidable (because of Rice's theorem). Read the Dragon book for an introduction to compiler issues.
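For a concrete feel of what the optimizer buys you (a minimal sketch; the exact output depends on your compiler version and flags), compare the assembly GCC emits for a trivial loop at `-O0` and at `-O3`:

```cpp
// sum.cpp -- inspect the generated assembly with:
//   g++ -O0 -S sum.cpp   (naive, one element per iteration)
//   g++ -O3 -S sum.cpp   (typically unrolled and vectorized)
#include <cstddef>

long sum(const int *a, std::size_t n)
{
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];          // at -O3, GCC usually emits SIMD code here
    return s;
}
```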
Famous mathematical problems can be reformulated as compiler optimization problems. Look at what Julia Robinson worked on (notably Hilbert's tenth problem).
C++14 is a slightly different story. Its standard library defines containers which are difficult to compile quickly, because template expansion in C++ is Turing-complete. So some weird C++ programs are short yet take an unreasonable amount of time to compile.
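Here is a deliberately pathological sketch of that (the `Blow` name is mine, purely for illustration): every distinct pair of template arguments is a new type, so the compiler must perform roughly 2^DEPTH instantiations before emitting a trivial executable. Raise `DEPTH` and time the compilation to watch it grow exponentially:

```cpp
// blowup.cpp -- each (N, Tag) pair is a distinct type, so this
// forces about 2^DEPTH template instantiations at compile time.
template <int N, int Tag>
struct Blow {
    static constexpr long value =
        Blow<N - 1, 2 * Tag>::value + Blow<N - 1, 2 * Tag + 1>::value;
};

template <int Tag>
struct Blow<0, Tag> {           // base case stops the recursion
    static constexpr long value = 1;
};

constexpr int DEPTH = 16;       // try 18 or 20 and time g++ on it

int main() { return Blow<DEPTH, 1>::value & 0xff; }
```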
Observe also that standard C++ headers are quite big: a simple `#include <vector>` expands to nearly ten thousand lines on my Linux GCC 9.
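You can check that on your own machine (the exact count varies with the compiler and library versions) by running only the preprocessor and counting the output lines:

```cpp
// vec.cpp -- measure the preprocessed size of one header with:
//   g++ -E vec.cpp | wc -l
#include <vector>
std::vector<int> v;
```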
Future (C++20 perhaps) versions of C++ might have modules.
And you could enable link-time optimization (with GCC, compile and link with `gcc -O2 -flto`), which, grossly speaking, compiles your code twice.
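Here is a minimal two-file sketch (hypothetical file names) of why that helps: compiling one translation unit at a time, GCC cannot inline `inc` into `main`, but with `-flto` it can do so at link time:

```cpp
// a.cpp -- the definition of inc() is only visible in this file
int inc(int x) { return x + 1; }

// main.cpp -- sees only a declaration, so without LTO the
// compiler has to emit a real call to inc()
int inc(int);
int main() { return inc(41); }

// Without LTO, the call stays an actual call:
//   g++ -O2 -c a.cpp main.cpp && g++ -O2 a.o main.o
// With LTO, GCC re-optimizes across both files at link time
// and can inline (even constant-fold) inc() into main():
//   g++ -O2 -flto -c a.cpp main.cpp && g++ -O2 -flto a.o main.o
```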
Some large C or C++ programs take hours of CPU time to build. Google's proprietary indexing code is rumored to be a single ELF executable compiled from more than 800 million lines of C++. Oracle's database products are rumored to be half a billion lines of C++.
Some large systems have a lot of lines: for example, all the source code in a typical Linux distribution amounts to about twenty billion lines of code (half of them being C or C++).
I am 60 years old, old enough to remember when the C code I wrote alone took an hour of compile time.
With metaprogramming techniques, you can get a lot of "emitted" C code from a few thousand lines of input.
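A tiny sketch of that effect (with hypothetical names): plain preprocessor X-macros already show it, since one short field list expands into a whole family of functions, and dedicated code generators push this much further:

```cpp
// xmacro.cpp -- run `g++ -E xmacro.cpp` to see how much code
// these few lines emit after preprocessing.
struct Box { int width, height, depth; };

// The field list is written once ...
#define FIELDS(X) X(width) X(height) X(depth)

// ... and expanded into one getter function per field.
#define MAKE_GETTER(name) \
    int get_##name(const Box &b) { return b.name; }
FIELDS(MAKE_GETTER)
#undef MAKE_GETTER
```

The same trick works identically in plain C.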
Cognitive science teaches us that a software developer loses focus (e.g. when thinking about a bug) within a few minutes, or even a few seconds.
Compiling the entire Linux kernel takes several minutes on a powerful desktop. Compiling the entire Qt toolkit (in C++), or the source of GCC 9, from the distributed source tarball takes several hours. Compiling Firefox can take more than a day.
> … if the compilation time is only a few milliseconds?
That never happened to me, except for hello world programs. And my desktop is a powerful AMD Threadripper 2970WX running Debian/Unstable. Even a small program like refpersys (6000 lines of C++ today, December 19th, 2019, git commit b1af17cb5e693efad0) takes 3.2 seconds to build (using `omake -j 10`).
Please teach me how a small program like RefPerSys could be built in milliseconds.