
Code amalgamation consists of copying the entire source code into a single file.

For instance, SQLite does this to reduce compile time and improve the performance of the resulting executable. In SQLite's case, the amalgamation is a single file of 184K lines of code.

My question is not about compile time (already answered in this question), but about the efficiency of the executable.

SQLite developers say:

In addition to making SQLite easier to incorporate into other projects, the amalgamation also makes it run faster. Many compilers are able to do additional optimizations on code when it is contained within a single translation unit such as it is in the amalgamation. We have measured performance improvements of between 5 and 10% when we use the amalgamation to compile SQLite rather than individual source files. The downside of this is that the additional optimizations often take the form of function inlining which tends to make the size of the resulting binary image larger.

From what I understand, this is due to interprocedural optimization (IPO) performed by the compiler.
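To illustrate what IPO buys here, below is a minimal sketch (file and function names are invented) of the effect the SQLite developers describe: when the helper's definition is visible in the same translation unit as its caller, as in an amalgamation, the compiler can inline it and fold constants, whereas with separate translation units it must emit a real call unless link-time optimization (e.g. gcc -flto) is enabled.

```c
/* util.c -- a hypothetical helper, normally built as its own
   translation unit. */
#include <limits.h>

int clamp_add(int a, int b)
{
    long long s = (long long)a + b;
    if (s > INT_MAX) return INT_MAX;
    if (s < INT_MIN) return INT_MIN;
    return (int)s;
}

/* main.c -- the caller.  Compiled separately (gcc -O2 -c main.c),
   the compiler sees only the declaration below and must emit a call;
   with the two files amalgamated into one translation unit (or with
   gcc -O2 -flto util.c main.c), it can inline clamp_add() and
   typically reduces main() to returning the constant 5. */
int clamp_add(int a, int b);   /* would normally live in util.h */

int main(void)
{
    return clamp_add(2, 3);
}
```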

GCC developers also say this (thanks @nwp for the link):

The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.

But they do not quantify the gain from this.

Are there any measurements, apart from those of SQLite, which confirm or refute the claim that IPO with amalgamation produces faster executables than IPO without amalgamation when compiled with gcc?

As a side question, with respect to this optimization, is code amalgamation the same thing as #including all the .cpp (or .c) files into one single file?
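For that side question, the #include variant (sometimes called a "unity build") looks roughly like the sketch below; the file names are made up. As far as the compiler is concerned the result is again a single translation unit, much like a concatenated amalgamation, although merging files either way means that `static` functions, file-scope globals and macros that used to be private to one file can now collide with one another.

```c
/* amalgamation.c -- a hypothetical unity-build file.  Instead of
   physically concatenating the sources with a script, each
   implementation file is pulled in with #include; the compiler still
   sees one big translation unit to optimize across. */
#include "lexer.c"
#include "parser.c"
#include "vm.c"
#include "main.c"
```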

Tom Cornebize
  • There is no language "C/C++". C++ supports additional programming techniques (leaving aside the different semantics for identical syntax/grammar) which are relevant here. – too honest for this site Aug 11 '16 at 14:22
  • Oh, and this is a discussion-style question. It is not suited for this site. – too honest for this site Aug 11 '16 at 14:23
  • The compilation speed advantage is dubious, since you can't do incremental builds. The result of the compilation need not be significantly better either, if the toolchain allows good link-time optimizations. – EOF Aug 11 '16 at 14:24
  • @Olaf The meaning of "C/C++" here is "the languages C or C++", not "the language C/C++". Which site would be better for such questions according to you? Since there are already related questions, I thought this was the right site. – Tom Cornebize Aug 11 '16 at 14:29
  • The cons you mention also apply to multi-file source distribution (A has to be recompiled + released to incorporate a new version of B). As an "A" app creator you often have little control over the deployment environment, so either you produce a very clever installer which handles the dependencies and installs additional SW as needed (Linux package managers like `dpkg` on Debian), or you have to distribute everything together with "A". The difference between a compile-time dependency and a bundled dependency is that by installing the first one you pollute the host less and risk fewer collisions. – Ped7g Aug 11 '16 at 14:30
  • That's also the reason why, in ecosystems where a strong packaging authority exists (Debian, for example), the suggested way for app creators is to *NOT* bundle 3rd-party libraries with the product, and to use the system-provided variants from the global repository. When I was using Windows (10y back), the term "DLL hell" was used a lot, usually leading to deploying any app with the required .dll files (in the expected version) bundled together with the .exe, not bothering with the system-wide installed ones. – Ped7g Aug 11 '16 at 14:33
  • Note that the answers about higher performance from a single file are from 2009. Things have changed since then. A lot! – Bo Persson Aug 11 '16 at 15:09
  • @Olaf I believe this question is not "primarily opinion-based". I edited my question to better reflect this. I am looking for facts about the supposed performance improvement as well as possible solutions for the software engineering issue. Could you please reopen it? – Tom Cornebize Aug 11 '16 at 16:24
  • 1) It is not only me who close-voted, so why ask me? 2) I actually voted as too broad, which still holds. 3) It could still also be seen as opinionated. – too honest for this site Aug 11 '16 at 17:47
  • @Olaf Nothing personal, you are just the first person on the list, that's why. Maybe I should split into several questions (e.g. one for the supposed performance gain, and one for the software engineering)? – Tom Cornebize Aug 11 '16 at 17:52
  • Last comment: This is misplaced on Stack Overflow anyway. You really should ask this on a discussion forum. This is a Q&A site. Anyway, do more research **first**. Maybe you just should get more practice, then you might see the answer yourself. Sometimes it is not good to ask things in advance; they will resolve themselves someday. – too honest for this site Aug 11 '16 at 17:55
  • @EOF My question is still marked as "primarily opinion based", although it is clearly not the case anymore. Can you please reopen it? – Tom Cornebize Aug 12 '16 at 05:34
  • Saw this while looking for a tool to create an amalgamation of header files for a library. I'm not interested in performance benefits, just portability of the include file. It's 100% true that, in at least older C++ compilers, you could get more optimized code if you created an amalgamation. Why do you have to declare functions before they are used? Newer compilers/generators that don't require you to declare functions, like C#, probably get less benefit. But they are taking two passes over your source at build time. Today, the question is probably more academic than practical. – Michael T Oct 14 '20 at 23:28

1 Answer


The organization of the source-code files will not "produce a more efficient binary," and the cost of reading from multiple source files at compile time is negligible.

A version control system will take deltas of any file regardless of size.

Ordinarily, separate components such as these are separately compiled to produce binary libraries containing the associated object code: the source code is not recompiled each time. When an "application A" uses a "library B" that is changed, then "application A" must be re-linked but it does not have to be recompiled if the library's API has not changed.

And, in terms of the library itself, if it consists of (hundreds of) separate source-files, only the files that have been changed have to be recompiled before the library is re-linked. (Any Makefile will do this.) If the source-code were "one huge thing," you'd have to recompile all of it every time, and that could take a long time ... basically, a waste of time.
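As a small sketch of that separate-compilation model (all names here are invented): the application depends only on the library's header, so changing the implementation file means recompiling that one file and re-linking, not recompiling the application.

```c
/* blur.h -- the library's public API; stays stable across releases */
#ifndef BLUR_H
#define BLUR_H
void blur(unsigned char *pixels, int width, int height);
#endif

/* blur.c -- the implementation; editing the internals here forces
   only blur.c to be recompiled before the library is re-linked */
#include "blur.h"
void blur(unsigned char *pixels, int width, int height)
{
    (void)pixels; (void)width; (void)height;   /* algorithm omitted */
}

/* app.c -- an application using the library; when blur.c changes it
   is re-linked but not recompiled, because it depends only on blur.h
   and the API has not changed */
#include "blur.h"
int main(void)
{
    unsigned char image[16] = {0};
    blur(image, 4, 4);
    return 0;
}
```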

There are two ways in which the object-code from a library (once it has been built ...) can be incorporated into an executable: static linking, and dynamic. If static linking is used, the necessary parts of the library will be copied into the executable ... but, not all of it. The library-file does not have to be present when the executable is run.

If dynamic linking is used, the entire library exists in a separate file (e.g. .DLL or .so) which does have to be present at runtime but which will be shared by every application that is using it at the same time.

I recommend that you primarily view this as a source-code management issue, not as something that will confer any sort of technical or runtime advantages. (It will not.) I find it difficult to see a compelling reason to do this at all.

Mike Robinson
  • I don't agree on the negligible runtime advantages. There are plenty of C compilers that don't support LTO. Unused functions, inlined functions and other optimizations will be made possible for these compilers by compiling all code in one file. – CodeMonkey Aug 11 '16 at 14:37
  • But if every application has its own copy of the library source code, then they will all have their own `*.so` file, no? So such a practice removes one of the advantages of dynamic libraries. – Tom Cornebize Aug 11 '16 at 14:37
  • Compilation itself is sometimes faster and simpler when handling just a single .h + .cpp, rather than having another 1200 .cpp files in a big folder tree and having to provide reasonable makefile rules to build them. Actually the build rules are probably the best reason why integrating a single .cpp into your project is simpler than building the library from a regular multi-file source. If you integrate only a pre-compiled .so/.a lib, then there's no advantage? Just #including a single .h is easier, but it's common practice to have the whole public API of a library defined through a single main header file. – Ped7g Aug 11 '16 at 14:38
  • If every library has a copy of identical source-code, then it will redundantly compile that source-code into an object-code (`.a` or `.obj`) file, which will then be linked along with all the rest of the executable. However, redundant compilation is really unnecessary: just put the thing into a static library and reference it in the project's Makefile. – Mike Robinson Aug 11 '16 at 14:41
  • @TomCornebize the "advantages of dynamic libraries" can be viewed as disadvantages in many scenarios. From a security point of view the advantages are huge, yet the breakage of an application due to a library receiving a security fix may in the end cause more damage than keeping a working app with a security hole (then again, anyone running such a critical app and just blindly updating libraries deserves such damage). But, for example, games often depend on a particular version of a library, with all its quirks and bugs, to behave properly. In such a situation the dynamic update is just annoying, w/o any advantage. – Ped7g Aug 11 '16 at 14:42
  • Remember also that you run a serious risk of having source-files (big or small) that are *out-of-sync* with others. Oopsie, a source-code change didn't get applied to *every copy* of that enormous-source-file. You get the idea ... Like I said, "a source-code management issue." – Mike Robinson Aug 11 '16 at 14:42
  • Static libraries are really very nice, including "from a security point of view," because the necessary parts of it are found and brought-together *by the linker,* and incorporated directly into the executable. It isn't all-or-nothing: the linker is smart enough to pick-and-choose what it needs. It all goes into the executable at that time. "Now, code-sign that puppy and you've got something that can't be tampered-with. It is *self-contained."* – Mike Robinson Aug 11 '16 at 14:45
  • @TomCornebize: Do you know that `.so` mean "**shared** object"? Think about the implications. – too honest for this site Aug 11 '16 at 14:48
  • @Olaf I am sorry, I do not understand your comment. If an application has its own copy of a library source code, then it will have its own shared object. – Tom Cornebize Aug 11 '16 at 14:58
  • @TomCornebize: Then it is not shared, is it? "To share" means a library is shared between different programs. There is little sense in generating a "shared library" for just one application. – too honest for this site Aug 11 '16 at 15:01
  • @MikeRobinson: This is strongly disputable. While an application does not depend on the library being updated if an issue is fixed, otoh a single library update can fix hundreds of applications with a shared library. So, no, there is no single binary position. – too honest for this site Aug 11 '16 at 15:03
  • The difference between a static library and a dynamic ("shared") library is *how and when* the object-code is made available to the application. A **static** library consists of compiler-output which has been linked to implement a set of public routines. The linker *selectively* retrieves only those portions needed, and incorporates them into the executable file. A **dynamic** library is the familiar `.DLL` or `.so` which is accessed *at runtime* and shared among all those who presently do so. The entirety of the dynlib is loaded into read-only shared memory segments. – Mike Robinson Aug 11 '16 at 15:49
  • @Olaf, when a **static** library is updated, all applications which reference it must then be re-linked. The overhead of "(re)compiling the source-code" is avoided altogether, but the obligation to re-link the affected executables is not. This fundamentally alters the executables and replaces some of the object-code contained in them. *Per contra,* when a **dynamic** library is updated, the single file that contains it is merely replaced. The executables which use the library do not require recompilation (unless the API has changed), since they *contain* none of its object-code. – Mike Robinson Aug 11 '16 at 15:51
  • @MikeRobinson: I don't think I need a lesson about linking, loading and the difference between static and dynamic libraries. Actually this difference is quite artificial; one could easily have a single library which can be statically linked as well as dynamically. Also note that modern OSes go one step further: they actually delay loading and relocation until the library is used (or even partition that further). Re. re-compilation: that is nonsense, as it is the general idea of libraries, resp. modules ("compilation units" in C), not to recompile modules which don't need it. – too honest for this site Aug 11 '16 at 17:52
  • @Olaf, I sincerely and publicly apologize to you if you think that I was "talking down" to you. I am personally not aware of a single library format that can *both* be a dynamically-loaded library *and* a linker-input from which object-code is extracted. – Mike Robinson Aug 11 '16 at 19:53
  • Well, Linux for instance uses ELF. On embedded systems, typically the linker is also the loader, i.e. it relocates the code to the run-time addresses. And what if you just call this linker right at program start? – too honest for this site Aug 11 '16 at 20:23
  • I am intending the term, "linker," specifically to mean the final stage after all source-modules have been compiled into their object-code modules, when all of the necessary object-code is bound together to become a self-contained executable. It is not relevant to say, "call it at program start," because this is how "the program" (executable) is *constructed.* However, I think we're beginning to "chat" here, so let's just call it a day. – Mike Robinson Aug 11 '16 at 22:46