
The reason we have to define inline functions in the header is that each compilation unit where the function is called must have the entire definition available in order to substitute the call. My question is: why are we forced to put the definition in a header file if the compiler can and does do its own inlining optimisations, which would require it to dig into the cpp files where the functions are defined anyway?

In other words, the compiler seems to me to have the ability to see the function "declaration" in a header file, go to the corresponding cpp file, pull the definition from it and paste it in the appropriate spot in the other cpp. Given that this is the case, why the insistence on defining the function in the header, as if the compiler can't "see" into other cpp files?
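To be concrete, here is the kind of setup I mean - a minimal sketch, with file and function names that are just for illustration:

    // mymath.h -- the whole definition lives in the header, so every cpp file
    // that includes it sees the body and the compiler can substitute the call.
    #ifndef MYMATH_H
    #define MYMATH_H

    inline int square(int x) { return x * x; }

    #endif

Both a.cpp and b.cpp would simply #include "mymath.h" and call square(); each translation unit ends up with its own copy of the definition, which is what inline permits.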

MSDN says this about the /Ob2 optimisation setting:

/Ob2 The default value. Allows expansion of functions marked as inline, __inline, or __forceinline, and any other function that the compiler chooses (my emphasis).

Zebrafish
  • There is no "corresponding cpp file" - the function definition can be in any source file, or in any object file or library that's going to be linked to the program later. In other words, the case is not given at all. – molbdnilo May 11 '16 at 13:22
  • @molbdnilo I see, sorry, I was wrong to say "corresponding cpp file". I'm still wondering how the compiler can inline functions as it sees fit without having the inline keyword. For example MSDN says: "/Ob2. The default value. Allows expansion of functions marked as inline, __inline, or __forceinline, and any other function that the compiler chooses. " This is at compiling stage I'm guessing, not at linker stage. – Zebrafish May 11 '16 at 13:39

4 Answers

4

The reason we're forced to provide definitions of inline functions in header files (or at least, in some form that is visible to the implementation when inlining a function in a given compilation unit) is the requirements of the C++ standard.

However, the standard does not go out of its way to prevent implementations (e.g. the toolchain or parts of it, such as the preprocessor, compiler proper, linker, etc) from doing things a little smarter.

Some particular implementations do things a little smarter, and so can actually inline functions even in circumstances where they are not visible to the compiler. For example, in a basic "compile all the source files then link" toolchain, a smart linker may realise that a function is small and only called a few times, and elect to (in effect) inline it, even if the call sites were not visible to the compiler (e.g. because the statements that call the function are in one compilation unit and the function itself is in another), so the compiler itself would not have done the inlining.

The thing is, the standard does not prevent an implementation from doing that. It simply states the minimum set of requirements for behaviour of ALL implementations.

Essentially, the requirement that the compiler have visibility of a function to be inlined is the minimum requirement from the standard. If a program is written in that way (e.g. all functions to be inlined are defined in their header file) then the standard guarantees that it will work with every (standard compliant) implementation.

But what does this mean for our smarter toolchain? The smarter toolchain must produce correct results from a program that is well-formed - including one that defines inline functions in every compilation unit that uses them. Our toolchain is permitted to do things smarter (e.g. peeking between compilation units), but if code is written in a way that REQUIRES such smarter behaviour (e.g. that the compiler peek between compilation units), that code may be rejected by another toolchain.
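A hedged sketch of that last point (the file and function names here are invented, and the program is ill-formed per the standard, no diagnostic required, because an inline function must be defined in every translation unit that uses it):

    // widget.h
    inline int answer();                  // declared inline, but not defined here

    // widget.cpp
    #include "widget.h"
    inline int answer() { return 42; }    // the only definition in the program

    // main.cpp
    #include "widget.h"
    int main() { return answer(); }       // calls answer() without seeing its body

A basic compile-then-link toolchain may fail to link this (the compiler typically drops an inline definition that widget.cpp itself never calls, leaving an undefined reference), while a toolchain that peeks between compilation units may happen to build it and even inline the call. Neither outcome is guaranteed, which is exactly why the portable minimum is to repeat the definition in every translation unit - i.e. put it in the header.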

In the end, every C++ implementation (the toolchain, standard library, etc) is required to comply with requirements of the C++ standard. The reverse is not true - one implementation may do things smarter than the standard requires, but that doesn't generate a requirement that some other implementation do things in a compatible way.

Technically, inlining is not limited to being a function of the compiler. It may happen in the compiler or the linker. It may also happen at run time - for example "Just In Time" technology can, in effect, restructure executable code after it has been run a few times in order to enhance subsequent performance [this typically occurs in a virtual machine environment, which permits the benefits of such techniques while avoiding problems associated with self-modifying executables].

Peter
  • Excellent answer and very educational. The way I understand it is that a compiler and linker may go extra steps even if the C++ standard's requirement - in this case, providing a FULL definition in a header - hasn't been met. So inlining optimisations can be done, using your language, by "peeking" into compilation units, either by the compiler or the linker. It's funny though, because best practice says declarations in the header and definitions in the cpp, but inlining goes against this, and it's a practice I've gotten used to. Thanks. – Zebrafish May 11 '16 at 15:24
  • inlining at link time requires the full smarts of the compiler. It's not something a normal linker can just do. Link-time optimization exists, but it works by storing the compiler's intermediate representation into object files, not by reading the final asm and inlining based on that. (This is the case for gcc and clang at least, and I assume other compilers that can do LTO.) – Peter Cordes May 11 '16 at 23:46
3

No, compilers traditionally can't do this. In the classic model, the compiler 'sees' only one cpp file at a time and can't go into any other cpp files. From this cpp file the compiler produces a so-called object file in the platform's native format, which is then linked using what is effectively a linker from the 1970s - one that is as dumb as a hammer.

This model is slowly evolving. With more and more effective link-time optimization (LTO), linkers become aware of what the C++ code is and can perform their own inlining. However, even with the link-time optimization model, compiler-done inlining and optimization are still far more effective than what can be done at link time - a lot of important context is lost when the C++ code is converted to an intermediate format suitable for linking.
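A rough sketch of that LTO path (assuming a GCC/Clang-style toolchain; file and function names are illustrative):

    // helper.cpp -- an ordinary, non-inline function defined in its own cpp file
    int helper(int x) { return x + 1; }

    // main.cpp -- only a declaration is visible here, so the compiler alone
    // cannot expand the call; with -flto the link-time step still sees the
    // compiler's intermediate representation of both files and may inline it.
    int helper(int x);
    int main() { return helper(41); }

    // Build:  g++ -O2 -flto helper.cpp main.cpp   (clang++ accepts the same flags)

Without -flto the call to helper stays an ordinary cross-object call, because by the time the traditional linker sees it, only machine code is left.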

SergeyA
  • I see. The default optimisation for Release mode in Visual Studio is documented as: "/Ob2. The default value. Allows expansion of functions marked as inline, __inline, or __forceinline, and any other function that the compiler chooses." So it doesn't do its own inlining? – Zebrafish May 11 '16 at 13:34
  • @TitoneMaurice, I am not familiar with MSVC toolchain, sorry. – SergeyA May 11 '16 at 13:52
3

The inline keyword isn't just about expanding the implementation at the point of call; it is in fact primarily about declaring that identical definitions of a function may exist in multiple translation units.
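A small sketch of that distinction (names are made up), for a header included from two different cpp files:

    // util.h, included from both a.cpp and b.cpp

    // Without the keyword, each translation unit would get its own definition
    // of bump, and linking the two object files typically fails with
    // "multiple definition of `bump(int)'":
    //
    //   int bump(int x) { return x + 1; }

    // Marking it inline makes the identical per-translation-unit copies legal
    // (and, as a side effect, hands the compiler a body it can expand at call sites):
    inline int bump(int x) { return x + 1; }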

This has been covered in other questions before, which explain it much better than I can :)

Why are class member functions inlined?

Is "inline" implicit in C++ member functions defined in class definition

Community
  • Thanks. Quoting from the answer from the second link you gave: "The inline keyword has approximately nothing to do with inlining. It's about whether multiple identical definitions of the function are permitted in different TUs." – Zebrafish May 11 '16 at 13:44
1

It's much easier for the compiler to expand a function inline if it has seen the definition of that function. The easiest way to let the compiler see the definition of a function in every translation unit that uses that function is to put the definition in a header and #include that header wherever the function will be used. When you do that you have to mark the definition as inline so that the compiler (actually the linker) won't complain about seeing the definition of that function in more than one translation unit.

Pete Becker