The code below calculates Fibonacci numbers using an exponentially slow algorithm:
#include <cstdlib>
#include <iostream>

#define DEBUG(var) { std::cout << #var << ": " << (var) << std::endl; }

// Naive doubly recursive Fibonacci; constexpr makes it usable at compile time.
constexpr auto fib(const size_t n) -> long long
{
    return n < 2 ? 1 : fib(n - 1) + fib(n - 2);
}

int main(int argc, char *argv[])
{
    const long long fib91 = fib(91); // const: compile-time evaluation is allowed here
    DEBUG( fib91 );
    DEBUG( fib(45) );                // ordinary run-time call
    return EXIT_SUCCESS;
}
I am calculating the 45th Fibonacci number at run time and the 91st one at compile time.
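For what it's worth, the distinction I have in mind can be made explicit: with a plain const variable the compiler may fold the call as an optimization, whereas constexpr (or a static_assert) requires compile-time evaluation. A minimal sketch of that, with fib91_ct being my own name for the variable:

#include <cstddef>

// Same naive definition as in the program above.
constexpr long long fib(std::size_t n)
{
    return n < 2 ? 1 : fib(n - 1) + fib(n - 2);
}

// Unlike plain const, constexpr requires the initializer to be evaluated by
// the compiler, and static_assert can only check compile-time constants.
constexpr long long fib91_ct = fib(91);
static_assert(fib91_ct > 0, "evaluated during compilation");

int main() {}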
The interesting fact is that GCC 4.9 compiles the code and computes fib91 in a fraction of a second, but it takes a while to spit out fib(45).
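To put a rough number on "takes a while", this is the kind of quick measurement I mean; it is a standalone sketch, and the volatile is only there to discourage the optimizer from folding the call away:

#include <chrono>
#include <cstddef>
#include <iostream>

constexpr long long fib(std::size_t n)
{
    return n < 2 ? 1 : fib(n - 1) + fib(n - 2);
}

int main()
{
    const auto start = std::chrono::steady_clock::now();
    volatile long long result = fib(45);   // run-time call; volatile discourages folding
    const auto stop = std::chrono::steady_clock::now();

    std::cout << "fib(45) = " << result << ", took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}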
My question: if GCC is smart enough to optimize the fib(91) computation and not take the exponentially slow path, what stops it from doing the same for fib(45)?
Does this mean GCC produces two compiled versions of the fib function, one fast and the other exponentially slow?
The question is not how the compiler optimizes the fib(91) calculation (yes, it does use a sort of memoization), but rather: if it knows how to optimize the fib function, why does it not do the same for fib(45)? And are there two separate compilations of the fib function, one slow and one fast?
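For clarity, by "memoization" I mean something like the following run-time sketch. This is only my illustration of the idea, not a claim about how GCC's constexpr evaluator is implemented:

#include <array>
#include <cstddef>
#include <iostream>

// Illustrative run-time memoization: each fib(k) is computed once and cached.
long long fib_memo(std::size_t n)
{
    static std::array<long long, 92> cache{};   // 0 means "not computed yet"
    if (n < 2)
        return 1;
    if (cache[n] == 0)
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return cache[n];
}

int main()
{
    std::cout << fib_memo(45) << '\n';   // finishes immediately
}

A cache like this makes fib_memo(45) linear instead of exponential, which is the kind of speed-up the compile-time evaluation of fib(91) appears to get.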