
While reading Stroustrup's "The C++ Programming Language", I came across this sentence on p. 108:

"The style of syntax analysis used is usually called recursive descent; it is a popular and straightforward top-down technique. In a language such as C++, in which function calls are relatively cheap, it is also efficient."

Can someone explain why C++ function calls are cheap? I'd also be interested in a general explanation, i.e. what makes a function call cheap in any language, if that's possible.

jds
  • I'd assume it's comparable to most statically typed, compiled languages which produce linkable object files. As opposed to script languages and dynamically typed languages which may look up type information and function names at run time. – Peter - Reinstate Monica May 03 '14 at 12:02
  • Yes, it's just referring to the fact that a function call just needs to fill some registers and jump, without needing to look up the function (which can be costly). But beware: while it might be cheap, you can easily get a stack overflow if you recurse too deeply (usually a few thousand levels, but it depends on many things). A more robust algorithm might implement its own variable-sized stack, or use a method which does not require a stack at all. – Dave May 03 '14 at 12:06
  • @MEAIN Nonsense. Memory for the activation record is allocated from the system stack, not statically (that would break recursion) or dynamically (pointless). Both in Java and C++. Java requires more work than C++ for some tasks, but this is not one of them. –  May 03 '14 at 16:42
  • @MEAIN static and dynamic (in the context of memory management) have nothing to do with static and dynamic (in the context of function calls and call resolution). Don't mix different things. They use the same terms (qualifiers), but talk about completely different things. – Manu343726 May 03 '14 at 16:53
  • cheap compared to what alternative? – Richard Hodges May 03 '14 at 19:26
  • A bit of historical context might help: the quoted passage appears in the first edition of the book, written in 1986. So Stroustrup didn't mean relative to Java, which came out much later. Also, the surrounding example is a parser where every non-terminal element in the grammar corresponds to a function. He's implying he wouldn't use that technique when programming in a language that had slow function calls. In 1986, assembly, C, and C++ had equally fast function calls; Basic, Simula, and LISP were slower for reasons given in the answers. – gatkin May 03 '14 at 20:55

3 Answers


Calling C or C++ functions (in particular when they are not virtual) is quite cheap since it involves only a few machine instructions, and a jump (with link to return address) to a known location.
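For instance (my own illustration, not part of the original answer), a plain non-virtual call like the one below compiles to a direct call to a fixed address, with arguments passed in registers, and may even be inlined away entirely:

#include <cstdio>

// A plain, non-virtual function: the call site in main() becomes a single
// direct call to a known address (or disappears if the compiler inlines it).
static int add(int a, int b) { return a + b; }

int main() {
    std::printf("%d\n", add(2, 3));   // prints 5
}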

In some other languages (e.g. Common Lisp, when applying an unknown variadic function), it may be more complex.

Actually, you should benchmark: many recent processors are out-of-order and superscalar, so they are doing "several things at a time".

However, optimizing compilers are capable of marvellous tricks.

In many functional languages, a called function is in general a closure, and needs some indirection (and also has to pass along the captured values).
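As a rough C++ analogue (my example, not the answer's): calling a capturing lambda through std::function goes through an extra level of indirection and has to carry its captured state, unlike a direct call to a known function:

#include <cstdio>
#include <functional>

int main() {
    int base = 10;                       // value captured by the closure
    std::function<int(int)> f = [base](int x) { return base + x; };

    // The call below is indirect: it goes through std::function's
    // type-erased target and reads the captured value of base.
    std::printf("%d\n", f(5));           // prints 15
}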

Some object oriented languages (like Smalltalk) may involve searching a dictionary of methods when invoking a selector (on an arbitrary receiver).
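A crude sketch of what dictionary-based dispatch costs, written in C++ purely for illustration (real Smalltalk implementations use method caches to speed this up):

#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>

int main() {
    // Selector name -> method: every "send" first pays for a hash lookup,
    // then an indirect call.
    std::unordered_map<std::string, std::function<int(int, int)>> methods;
    methods["add"] = [](int a, int b) { return a + b; };

    std::printf("%d\n", methods.at("add")(2, 3));   // prints 5
}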

Interpreted languages may have considerably larger function-call overhead.

Basile Starynkevitch

Function calls are cheap in C++ compared to most other languages for one reason: C++ is built upon the concept of function inlining, whereas (for example) Java is built upon the concept of everything-is-a-virtual-function.

In C++, most of the time you call a function, you're not actually generating a call instruction. Especially when calling small or template functions, the compiler will most likely inline the code. In that case the function call overhead is simply zero.
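To illustrate (my own example, not from the answer): with optimizations enabled, a call to a tiny function like this is very likely to be inlined, leaving no call instruction at all:

#include <cstdio>

// A tiny function like this is a prime inlining candidate: with
// optimizations on, the call below typically compiles to the same
// code as writing 6 * 7 directly -- zero call overhead.
inline int area(int w, int h) { return w * h; }

int main() {
    std::printf("%d\n", area(6, 7));   // most compilers fold this to 42
}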

Even when a function is not inlined, the compiler can make assumptions about what it does. For example, the Windows x64 calling convention marks the registers RAX, RCX, RDX, R8-R11 and XMM0-XMM5 as volatile: the caller must assume the callee may overwrite them, so any live values kept in those registers have to be saved and restored around the call. But if the compiler can see the called function and prove that some of those registers are left untouched, that save-and-restore code can be omitted. This optimization is much harder when calling a virtual function, because the target is not known at compile time.

Sometimes inlining is not possible. Common reasons include the function body not being available at compile time, or the function being too large. In that case the compiler generates a direct call instruction. However, because the call target is fixed, the CPU can prefetch the target instructions quite well. Although direct function calls are fast, there is still some overhead, because the caller may need to save some registers on the stack, adjust the stack pointer, and so on.

Finally, when making a Java method call or calling a C++ function declared virtual, the CPU executes an indirect call. The difference from a direct call is that the target is not fixed, but is instead loaded from memory (for C++, from the object's vtable). The target function may change during the runtime of the program, which means that the CPU cannot always prefetch the instructions at the target location. Modern CPUs and JIT compilers have various tricks up their sleeves to predict the target of such calls, but it is still not as fast as a direct call.
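A minimal sketch of the difference (my example, not the answer's): the non-virtual call below has a fixed target, while the virtual one is an indirect call through the object's vtable (unless the compiler manages to devirtualize it):

#include <cstdio>

struct Shape {
    virtual int sides() const { return 0; }   // virtual: dispatched through the vtable
    int id() const { return 42; }             // non-virtual: direct call
    virtual ~Shape() = default;
};

struct Triangle : Shape {
    int sides() const override { return 3; }
};

int main() {
    Triangle t;
    Shape* s = &t;
    std::printf("%d\n", s->id());      // direct call: target known at compile time
    std::printf("%d\n", s->sides());   // indirect call: target loaded from the vtable at run time
}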

tldr: function calls in C++ are fast because C++ implements inlining and by default uses direct calls over virtual calls. Many other languages do not implement inlining as well as C++ does and utilize virtual functions by default.

Stefan
  • I don't have Stroustrup in front of me, but the context makes it sound like he is talking about recursive function calls here, which of course cannot be inlined. – Nate Eldredge May 03 '14 at 14:46
  • @NateEldredge: Actually some inlining can occur even for recursive functions: http://stackoverflow.com/q/190232/951890 – Vaughn Cato May 03 '14 at 15:42
  • @Nate Weird. Recursive function calls are one of the cases where, in theory, it should be quite easy to optimize those virtual calls away. Saying that C++ function calls are fast makes more sense in my opinion when not talking about recursive functions. – Stefan May 03 '14 at 16:25
  • JIT compilers *can* (speculatively; look up polymorphic inline caching) resolve and inline virtual functions, and AFAIK JVMs are very aggressive about it (partly because, as you mention, every method call is virtual) but of course this happens at run time and hence only pays off if the call is executed very frequently. –  May 03 '14 at 16:48
  • Most of the time, however, functions are not inlined. Inlining does not happen except with really tiny functions called in tight loops. Can you provide some credible source for that claim? And Stroustrup is not talking about inlining. He explicitly mentions **function calls** as being cheap. Also, the main difference between C++ and C is the introduction of OOP, which is pretty much based on virtual tables. So, if you are using C++ to write code using solely inlined static functions, then you are using it wrong. – vgru May 03 '14 at 16:48
  • @Gree I highly disagree that you're doing it wrong when using C++ without virtual functions. In C++ you only pay for what you use, and in performance-sensitive code I definitely do not want to use virtual calls. – Stefan May 03 '14 at 17:15
  • Considering that modern JVMs inline *way* more aggressively than any C++ compiler ever could, I find the first sentence rather entertaining. And obviously you can make functions non-virtual in Java as well if you want, although the default in Java is certainly flawed from a design perspective. – Voo May 03 '14 at 20:12

The cost of a function call is associated with the set of operations required to go from one scope to another, i.e., from the current execution context into the scope of another function. Consider the following code:

void foo(int w) { int x, y, z; ...; }
int main() { int a, b, c; ...; foo(b); ...; }

The execution starts in main(), and you may have some variables loaded into registers or memory. When you reach foo(), the set of variables available for use is different: the values of a, b and c are not reachable from foo(), and if the available registers run out, values held in them have to be spilled to memory.

The issue with registers appears in any language, but some languages need more complex operations to change from one scope to another: C++ simply pushes whatever the function requires onto the stack (in this case, while running foo(), the parameter w holds a copy of the value of b from main()'s scope).

Other languages must allocate and pass along more complex data to give access to surrounding-scope variables. These extra allocations, and even lookups of specific names within the enclosing scopes, can raise the cost of a function call considerably.

Rubens