For one example, C++ lacks the keyword `restrict`. Used correctly, it sometimes allows the compiler to produce faster code.
- it's fairly rare in practice to see benefits from `restrict`, but it happens,
- there are plenty of occasions when a C++ or C compiler can deduce (after inlining) that the necessary conditions for `restrict` apply, and act accordingly, even though the keyword isn't used,
- C++ compilers that are also C compilers might provide `restrict` or `__restrict` as an extension anyway.
But occasionally (or quite commonly in some domains), `restrict` is a really good optimization, which I'm told is one of the reasons for still using Fortran. It's certainly one of the reasons for the strict aliasing rules in both C and C++, which give the same optimization opportunity as `restrict` for a more limited set of circumstances.
Whether you "count" this depends to an extent on what you consider "equivalent code". `restrict` never changes the meaning of a program that uses it validly -- compilers are free to ignore it. So it's not a stretch to describe the program that uses it (for the eyes of the C compiler) and the program that doesn't (for C++) as "equivalent". The version with `restrict` took more (perhaps only slightly more) programmer effort to create, since the programmer has to be sure that it's correct before using it.
If you mean, is there a program that is valid C and also valid C++, and has the same meaning in both, but implementations are somehow constrained by the C++ standard to run it slower than C implementations, then I'm pretty sure the answer is "no". If you mean, are there any potential performance tweaks available in standard C but not in standard C++, then the answer is "yes".
Whether you can get any benefit from the tweak is another matter; whether you'd have got more benefit for the same amount of effort with a different optimization available in both languages is another; and whether any benefit is big enough to base your choice of language on is still another. It's laughably easy to interoperate between C and C++ code, so if you have any reason at all to prefer C++, then like any optimization that alters your preferred way of coding, switching to C would normally be something you'd do when your profiler tells you your function is too slow, and not before.
Also, I'm trying to convince myself one way or the other whether the potential for exceptions costs performance, assuming that you never use any type that has a non-trivial destructor. I suspect that in practice it probably can (and that this is a contradiction to the "don't pay for what you don't use" principle), if only because otherwise there'd be no point in gcc having `-fno-exceptions`. C++ implementations bring the cost down pretty low (and it's mostly in rodata, not code), but that doesn't mean it's zero. Latency-critical code may or may not also be binary-size-critical code.
Again it might depend what you mean by "equivalent" code -- if I have to compile my so-called "standard C++ program" using a non-standard compiler (such as `g++ -fno-exceptions`) in order to "prove" that the C++ code is as good as the C, then in some sense the C++ standard is costing me something.
Finally, the C++ runtime itself has a start-up cost, which is not necessarily identical to the C runtime start-up cost for the "same" program. You can generally hack about to reduce the cost of both by removing things you don't strictly need, but that is effort, and implementations don't necessarily do it for you completely every time. So it's not strictly true that in C++ you don't pay for what you don't use: that's the general principle, but achieving it is a quality-of-implementation issue.