-2

When calculating the 64th Fibonacci number, the first algorithm takes several hours, and the second one takes less than a second.

Why is the second algorithm so much more efficient than the first one?

They look very similar.

def fib_divide_recursion(n):
    # Plain recursion: the same subproblems get recomputed over and over.
    if n <= 2:
        return n-1
    else:
        return fib_divide_recursion(n-1) + fib_divide_recursion(n-2)

def fib_linear_recursion(n, prev={}):
    # Memoized recursion: the mutable default dict prev caches results,
    # so each value is computed at most once.
    if n <= 2:
        return n-1
    try:
        return prev[n]
    except KeyError:
        prev[n] = (fib_linear_recursion(n - 1, prev) +
                   fib_linear_recursion(n - 2, prev))
        return prev[n]
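
A rough timing sketch to compare them (using time.perf_counter; the naive version is only practical for a modest n, since n = 64 would take hours):

import time

def time_call(f, n):
    # Return how long a single call f(n) takes, in seconds.
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start

# Compare at a modest n; the naive version is already far slower,
# and the gap widens explosively as n grows.
print("naive,    n = 30:", time_call(fib_divide_recursion, 30))
print("memoized, n = 30:", time_call(fib_linear_recursion, 30))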
Niayesh Isky
Plutors
  • Because the first one does more work. Count the number of function calls it performs. – n. m. could be an AI Apr 21 '19 at 03:29
  • Evaluating `fib_divide_recursion(25)` calls the function 150049 times, evaluating `fib_linear_recursion(25)` calls it 47 times, and the difference gets worse quickly as `n` grows. – Mark Apr 21 '19 at 03:31
  • Apart from that, you are using recursion for the most common example of iterative programming (see the iterative sketch after these comments). – Klaus D. Apr 21 '19 at 03:33
  • @Klaus Yeah, but it can be valuable to learn Fibonacci recursively, too - in fact, it's one of the most common intro-to-recursion programs. – Niayesh Isky Apr 21 '19 at 03:52
  • And it is the worst possible example for teaching programming. – Klaus D. Apr 21 '19 at 03:54
  • 1
    @Klaus Well, I agree with you there, but unfortunately it's hard to counteract the tidal wave of precedent. Anyway, learning Fibonacci is useful for understanding the pros and cons of recursion vs. iteration, even if it's not ideal, simply because it's ridiculously well-established. – Niayesh Isky Apr 21 '19 at 04:24
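
For reference, the iterative version the comments allude to might look roughly like this (a sketch, using the same indexing convention as the question's functions, so fib_iterative(1) == 0 and fib_iterative(2) == 1):

def fib_iterative(n):
    # Same indexing as the question's functions: fib_iterative(2) == 1.
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print(fib_iterative(64))   # instant, and no recursion at all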

3 Answers

3

The second implementation uses memoization to remember previously calculated Fibonacci values.

Consider that you're trying to calculate fib(5): you first have to calculate fib(4) and fib(3), and fib(4) itself also requires fib(3). For every Fibonacci number, you can either calculate each of the preceding Fibonacci numbers once and store them (this is the memoization method), or, at much worse performance, recalculate every Fibonacci number you need, even the ones you've already calculated before. Without memoization you end up doing exponentially more work, and for large n this really makes a difference, as you've observed.
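
The same effect can also be had without managing the dictionary by hand; for example, Python's functools.lru_cache does the memoization for you (a sketch, not the question's code):

from functools import lru_cache

@lru_cache(maxsize=None)   # cache every result that has been computed
def fib_cached(n):
    if n <= 2:
        return n - 1
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(64))   # fast, like fib_linear_recursion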

Niayesh Isky
  • thanks, and why can't I use prev=[] to replace prev={}, even if I change KeyError to IndexError too? – Plutors Apr 21 '19 at 04:31
  • @longchaos Really, that's a different question which you should post separately if you want it properly answered. But the short answer is that a list is not a dictionary: You can't index into a part of the list that doesn't exist yet and expect it to be automatically created. You would have to initialise it with the number of elements you're going to need, which would not be scalable. – Niayesh Isky Apr 21 '19 at 04:35
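
For illustration, a list-based variant along those lines has to allocate the whole table up front; a hypothetical sketch that sizes a fresh list to n on each top-level call (rather than relying on a mutable default argument):

def fib_list_memo(n, prev=None):
    # prev[i] will hold the i-th value; the list is allocated up front, sized n + 1.
    if prev is None:
        prev = [None] * (n + 1)
    if n <= 2:
        return n - 1
    if prev[n] is None:
        prev[n] = fib_list_memo(n - 1, prev) + fib_list_memo(n - 2, prev)
    return prev[n]

print(fib_list_memo(64))

With a list you need to know the required size in advance (or grow it manually), which is exactly the pre-sizing issue described in the comment above; a dictionary just grows as new keys are added.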
1

The complexity of the first algorithm is O(2^n).

The second one caches the results in prev, so it never computes fib_linear_recursion more than once for a given number. Its complexity is linear, O(n).

See this answer for more details.
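
One way to see the two growth rates directly is to count calls; a sketch (the counter dictionary and the two counted wrappers are additions for illustration, using the same logic as the question's functions):

calls = {"naive": 0, "memo": 0}

def fib_naive_counted(n):
    calls["naive"] += 1
    if n <= 2:
        return n - 1
    return fib_naive_counted(n - 1) + fib_naive_counted(n - 2)

def fib_memo_counted(n, prev={}):
    calls["memo"] += 1
    if n <= 2:
        return n - 1
    if n not in prev:
        prev[n] = fib_memo_counted(n - 1, prev) + fib_memo_counted(n - 2, prev)
    return prev[n]

fib_naive_counted(25)
fib_memo_counted(25)
print(calls)   # {'naive': 150049, 'memo': 47}

The naive count grows exponentially with n, while the memoized count grows linearly, matching the O(2^n) and O(n) estimates above.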

Keldorn
1

The 1st algorithm uses plain recursion, while the 2nd uses dynamic programming, i.e. recursion with memoization.

If you draw the recursion tree for the 1st algorithm you'll see repeated nodes, but the 2nd algorithm stores each already-calculated value, so the program doesn't have to compute it again and again.
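
To see those repeated nodes, here is a small sketch that prints the naive recursion tree (fib_tree is an illustrative helper, not code from the question):

def fib_tree(n, depth=0):
    # Print every node of the naive recursion tree, indented by depth.
    print("  " * depth + f"fib({n})")
    if n <= 2:
        return n - 1
    return fib_tree(n - 1, depth + 1) + fib_tree(n - 2, depth + 1)

fib_tree(5)
# fib(3) appears twice and fib(2) three times in the output;
# the memoized version computes each of them only once.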

sminmgr