
Here is the PDF I will be referring to: https://www.docdroid.net/htE62SR/215-216.pdf

The algorithm I'm referring to (which can also be found in the PDF linked above) is the following:

public static long fibonacciBad(int n) {
    if(n <= 1)
        return n;
    else
        return fibonacciBad(n-2) + fibonacciBad(n-1);
}

I'm trying to understand why fibonacciBad is O( 2^(n/2) ), assuming I'm correct that this is what the PDF is implying.

I suspect that it has to do with applying B to every second call in A, but the specifics are unclear to me. Also, could someone please give me an intuitive explanation of why it is enough for the number of calls to at least double every *second* step (rather than every single step) for the growth to be considered exponential? (I'm making up this alphabetical notation, A and B, for below. This notation isn't used in the PDF linked to here.)

A: 
c_0 = 1 
c_1 = 1 
c_2 = 1 + c_0 + c_1 = 1 + 1 + 1 = 3 
c_3 = 1 + c_1 + c_2 = 1 + 1 + 3 = 5 
c_4 = 1 + c_2 + c_3 = 1 + 3 + 5 = 9 
c_5 = 1 + c_3 + c_4 = 1 + 5 + 9 = 15 
c_6 = 1 + c_4 + c_5 = 1 + 9 + 15 = 25 
c_7 = 1 + c_5 + c_6 = 1 + 15 + 25 = 41 
c_8 = 1 + c_6 + c_7 = 1 + 25 + 41 = 67 

B: 
1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1
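To sanity-check table A, here is a small counting sketch (my own helper, not from the PDF) that instruments fibonacciBad with a call counter; each invocation adds 1, so the totals should satisfy c_n = 1 + c_(n-2) + c_(n-1):

```java
public class FibCount {
    static long calls; // number of invocations so far

    public static long fibonacciBad(int n) {
        calls++; // count this call: c_n = 1 + c_(n-2) + c_(n-1)
        if (n <= 1)
            return n;
        else
            return fibonacciBad(n - 2) + fibonacciBad(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 8; n++) {
            calls = 0;
            fibonacciBad(n);
            System.out.println("c_" + n + " = " + calls);
        }
    }
}
```

Running this reproduces the sequence from table A: 1, 1, 3, 5, 9, 15, 25, 41, 67.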
    I think this question would be a better fit for [Computer Science Stack Exchange](https://cs.stackexchange.com) as it about algorithm complexity and not directly about programming, but be sure to check their [help center](https://cs.stackexchange.com/help) to make sure your question is on-topic there. – NobodyNada Sep 30 '17 at 21:42

2 Answers


The naive recursive version of Fibonacci is exponential by design, due to repetition in the computation:

At the root you are computing:

F(n) depends on F(n-1) and F(n-2)

F(n-1) depends on F(n-2) again and F(n-3)

F(n-2) depends on F(n-3) again and F(n-4)

So at each level you have 2 recursive calls that redo a lot of work. The time function looks like this:

T(n) = T(n-1) + T(n-2) + C, with C constant

T(n-1) = T(n-2) + T(n-3) + C > T(n-2), so

T(n) > 2*T(n-2) > 4*T(n-4) > ... > 2^k * T(n-2k)

After roughly n/2 such steps:

T(n) > 2^(n/2) * T(1), i.e. T(n) = Ω(2^(n/2))

This is just a lower bound, which for the purpose of your analysis should be enough, but the real time function satisfies the same recurrence as Fibonacci itself, and its closed form is known to be exponential in the golden ratio.
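For completeness, the closed form referred to here is Binet's formula:

```latex
F(n) = \frac{\varphi^n - \psi^n}{\sqrt{5}},
\qquad \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,
\qquad \psi = \frac{1-\sqrt{5}}{2} \approx -0.618
```

Since |ψ| < 1, the ψ^n term vanishes as n grows, so F(n) grows as Θ(φ^n), and the call count of the naive recursion grows at the same rate up to constants. Note that 2^(n/2) ≈ 1.414^n, which is why it sits below φ^n ≈ 1.618^n as a lower bound.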

In addition, you can find optimized versions of Fibonacci using dynamic programming like this:

static long fib(int n)
{
    if (n <= 1)
        return n; /* base cases; also avoids out-of-bounds writes for n = 0 */

    /* memory: table of already-computed values (long to delay overflow) */
    long f[] = new long[n+1];
    int i;

    /* Init */
    f[0] = 0;
    f[1] = 1;

    /* Fill: each value is computed exactly once */
    for (i = 2; i <= n; i++)
    {
        f[i] = f[i-1] + f[i-2];
    }

    return f[n];
}

That version is optimized and does only n steps, but it is also exponential, in the following sense.

Cost functions map the input size to the number of steps needed to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table), or the easiest algorithm to check whether a number is prime (about sqrt(n) steps to test the candidate divisors of the number), you may think that these algorithms are O(n) or O(sqrt(n)). This is simply not true, for the following reason: the input to your algorithm is a number n, and in binary notation the input size of an integer n is log2(n) bits. So do a change of variable:

m = log2(n) // your real input size

and let's find the number of steps as a function of the input size:

m = log2(n)
2^m = 2^log2(n) = n

then the cost of your algorithm as a function of the input size is:

T(m) = n steps = 2^m steps

and this is why the cost is exponential.
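A quick illustration of this change of variable (my own example, using the binary length of n as the input size m):

```java
public class InputSize {
    public static void main(String[] args) {
        for (int n : new int[] { 15, 255, 65535 }) {
            // m = number of bits needed to write n in binary
            int m = Integer.toBinaryString(n).length();
            // an "O(n)" loop does n steps, but here n = 2^m - 1,
            // i.e. the step count is exponential in the input size m
            System.out.println("n = " + n + ", m = " + m
                    + " bits, steps = " + n + " = 2^" + m + " - 1");
        }
    }
}
```

Doubling the number of input bits m squares the number of steps, which is the exponential behaviour described above.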

Miguel

You can try to picture to yourself how this function is computed:

                             fib(n)
                            /      \
                           /        \
                      fib(n-1)     fib(n-2)
                      /   \          /    \
                     /     \        /      \
                fib(n-2) fib(n-3) fib(n-3)  fib(n-4)
         ....    ....     ....      ... ...       ...   ...
           /    \      /     \      /     \         /    \
     fib(1) fib(1) fib(1) fib(1) fib(1) fib(1)  fib(1)  fib(1)

So this is a tree of height at most n. The number of nodes lies between 2^(n/2) and 2^n (the shortest root-to-leaf paths decrease n by 2 at each step, the longest by 1), hence the computation is exponential: Ω(2^(n/2)) and O(2^n).

As you can see, there are quite a few repeated computations of the same values, so you can reduce the number of computations by simply storing those results in a cache, thereby reaching a complexity of O(n).
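A sketch of that caching idea (top-down memoization; the class and map names are my own, not from either answer):

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n) {
        if (n <= 1)
            return n;
        Long known = cache.get(n); // reuse a previously computed value
        if (known != null)
            return known;
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result); // each fib(k) is stored the first time it is computed
        return result;
    }
}
```

Each fib(k) is computed once and afterwards served from the cache, so only O(n) additions are performed; the tree above collapses to a single path with cache hits hanging off it.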

Artem Barger
  • This is not exactly true. You can do T(n) = T(n-1) + T(n-2) > 2*T(n-2); applying the same inequality repeatedly you will have T(n) > 2^(n/2)*T(1), which is O(2^(n/2)). Your results are similar but not accurate enough. – Miguel Sep 30 '17 at 21:53
  • The exact solution is the Fibonacci closed form itself: https://en.wikipedia.org/wiki/Fibonacci_number – Miguel Sep 30 '17 at 21:54
  • I was trying to provide an intuition, obviously this is not a binary tree, using big-o notation, O(2^(n/2)) same as O(2^n). So, yes thanks for your comment, will edit and update my answer. – Artem Barger Sep 30 '17 at 21:56
  • My point is that academically speaking the Fibonacci function brings to the table the idea of the recursion not being exactly the same in each call. If you assume this away, you are losing the important part of the analysis. – Miguel Sep 30 '17 at 22:00
  • > idea of recursion not being exactly the same in each call. Well, to be precise, many recursions bring this property to the table; you can think of merge sort for example. And in the Fibonacci case, the important thing is the fact that you can use additional memory to memorize previous results, hence saving running time. With respect to complexity analysis, O(2^n) is similar to O(2^(n/2)). – Artem Barger Sep 30 '17 at 22:05
  • Don't take it personally, because it isn't. Merge sort is not an example of this at all. Merge sort divides the collection as close to the middle as possible, and you can safely assume T(n) = 2*T(n/2) + O(n). Following your line of thinking, QuickSort is a better example. Anyway, those algorithms are more advanced than Fibonacci. – Miguel Sep 30 '17 at 22:10
  • O(2^(n/2)) is **not the same** as O(2^n). The former is about O(1.414^n), because 2^(0.5n) is equivalent to (2^0.5)^n. Neither is the correct complexity for the recursive Fibonacci function. – kaya3 Dec 22 '19 at 11:31