We have a very large program here which mixes C++ and FORTRAN (sorry). One of my check-ins has resulted in a dramatic slowdown of the complete application (i.e. by a factor of two or more) - even in areas of the code which are not affected by my changes.
The facts:
Almost every module has slowed by a similar amount - even ones which use none of my code.
The executable is about 6% bigger.
The metadata has not been changed between check-ins.
The IDE/compiler is VS2010 in Release mode.
Some .lib files have doubled or tripled in size.
I looked at one of the .lib files which has tripled in size, and there are only two changes:
a) I have included a large-ish header file which in turn includes many others, some of which contain moderately complicated inline code. The 'Additional Include Directories' setting has gone from none-or-one to about 7, since each header file #includes one or more others.
b) I call four functions from this header file, but none of them is executed during the run that has slowed down (i.e. their execution cannot be slowing the code down, though their mere inclusion conceivably might be).
Despite searching the forums for any discussion of whether including header files slows down execution (as opposed to compilation), I can't find a single relevant article. My questions are:
- Does the #inclusion of any form of header (declarations or inline code) slow down code execution?
- Is there a qualitative or quantitative difference in the effect of included inline code on execution speed (I know that 'inline' is only advice to the compiler)?
- What are the correlations between .lib size, .exe size and execution speed (I'm expecting lots of different and contradictory correlations here)?
- Do you think that refactoring some of the header files so that they don't need to include others (by moving those includes into a .cpp file, and thus shortening my 'Additional Include Directories' list) would improve my situation?
I guess the last question is the meat of the issue, as it will take a lot of effort...