I am writing a compiler whose final output is C++ code, and I am currently wondering which output layout compiles faster.
First, some notes about the compiler:
- I don't have any classes/structs; they are optimized away inside the functions.
- I don't include anything like
#include <vector>
Instead, when I have to use a library function such as printf, I put in the prototype manually. (The compiler does this itself.)
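For example, a minimal sketch of what such generated output might look like (the exact prototype text is my assumption, not necessarily what the compiler emits):

// Generated: prototype emitted directly instead of #include <cstdio>
extern "C" int printf(const char* format, ...);

int main() {
    printf("hello from generated code\n");
    return 0;
}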
I have two options:
Option #1:
//.h
#pragma once  // guard so repeated inclusion is harmless
#include "A.h"
#include "B.h"
int function( /* parameters */ );
//.cpp
int function( /* parameters */ ) {
    // code
}
Every function has its own source and header file. Advantages:
- I can make the compiler comment out includes whose contents were already pulled in by an earlier include. For example, if everything in #include "B.h" is already included via #include "A.h", then I can make the compiler comment out the #include "B.h" line. (This saves file reads; see the first sketch after this list.)
- I can recognize unchanged methods/functions/files (when I regenerate my code and the generated files match the previous run exactly) and recycle their object files. (This saves object compilation; see the second sketch after this list.)
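A sketch of the include elision, with hypothetical contents for A.h and B.h (the guard macros and helper names are my own assumptions):

// B.h
#ifndef B_H
#define B_H
int b_helper(int x);  // hypothetical helper
#endif

// A.h
#ifndef A_H
#define A_H
#include "B.h"        // A.h already pulls in B.h
int a_helper(int x);  // hypothetical helper
#endif

// function.cpp (generated)
#include "A.h"
// #include "B.h"     // elided by the generator: already included via A.h
int function(int x) {
    return a_helper(x) + b_helper(x);
}

And a minimal sketch of the object-file recycling test, assuming the generator keeps the previous run's sources and objects around (the helper and parameter names are hypothetical):

#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

// Read a whole file into a string (hypothetical helper).
std::string read_file(const std::filesystem::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

// Reuse the old object file only if the freshly generated source is
// byte-identical to the source that produced that object last time.
bool can_recycle_object(const std::string& new_source,
                        const std::filesystem::path& old_cpp,
                        const std::filesystem::path& old_obj) {
    return std::filesystem::exists(old_cpp)
        && std::filesystem::exists(old_obj)
        && read_file(old_cpp) == new_source;
}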
Option #2:
int function( /* parameters */ );
int function2( /* parameters */ );
int function3( /* parameters */ );
// ...
int function( /* parameters */ ) {
    // code
}
// ...
All functions are declared once (the prototypes at the top), defined, and compiled in that single file.
Advantages:
- A single sequential read from disk. (No include hierarchy and no repeated inclusion of the same files from different translation units.)
- A single object file to compile, excluding libraries.
At first glance it looks like option #1 should be faster, but some folks said they tried the second option and that it gave their project a boost in compile time. They didn't compare both options, though, and didn't give any evidence.
Can I get an explanation of which one is faster, rather than benchmarks?