
I'm using Racket (derivative of Scheme/Lisp), and I wrote this Fibonacci Algorithm which uses Accumulators:

(define (fibonacci* n)
  (local (; NaturalNumber NaturalNumber NaturalNumber -> NaturalNumber
          ; Add accumulators for current and previous fibonacci numbers
          (define (fibonacci-acc x current previous)
            (if (= x n) current
                (fibonacci-acc (add1 x) (+ current previous) current))))
    (fibonacci-acc 0 0 1)))

This is a BIG improvement over a function that doesn't use accumulators, like the following:

(define (fibonacci n)
  (if (<= n 1) n
      (+ (fibonacci (- n 1))
         (fibonacci (- n 2)))))

But how can I set up recurrence equations to calculate how much more efficient it is?

Sean_J
    did you at least try to compute the complexity of your code? – eliasah Apr 15 '15 at 14:50
  • I only just learned how to do recurrences, and we've always been given some formula like T(n)=T(n/3)+n^2. I haven't learned how to analyze code and come up with a recurrence formula like that. I came here in hopes that someone would be able to show me how, using code that I wrote, so I could wrap my head around it better. – Sean_J Apr 15 '15 at 14:53
  • I don't want to single-handedly close as dupe because I have an accepted answer - but it really is a dupe :| http://stackoverflow.com/q/7547133/572670 – amit Apr 15 '15 at 18:27

3 Answers


Let T(n) be the time to compute (fib n) where fib is:

(define (fib n)
  (if (<= n 1) 
      n
      (+ (fib (- n 1))
         (fib (- n 2)))))

Since the body of fib has a conditional (if (<= n 1) ...) we need to consider two cases.

If n<=1, then the expression n is evaluated. It is a variable reference and it takes constant time. Let's set the time to 1 (unit of time).

In short we have:

T(0) = 1
T(1) = 1

For an n larger than 1, the expression (+ (fib (- n 1)) (fib (- n 2))) is evaluated. The time it takes to evaluate (fib (- n 1)) is, by the definition of T, precisely T(n-1). Likewise it takes time T(n-2) to compute (fib (- n 2)). The results of the two subexpressions are then added with (+ ...). The time it takes to add two fixnums (63-bit numbers) is more or less the same as the time of a variable reference, so we set the time for the addition to 1. Together we get:

T(n) = 1 + T(n-1) + T(n-2)    for n>1

The three recurrence equations are thus:

T(0) = 1
T(1) = 1
T(n) = 1 + T(n-1) + T(n-2)    for n>1
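These equations can also be checked mechanically. Here is a minimal Racket sketch (the name T simply mirrors the analysis above) that evaluates the recurrence directly and tabulates it next to the 2^n bound:

```racket
#lang racket
;; Evaluate the recurrence T directly: one unit for the base case,
;; one unit for the addition plus the cost of the two recursive calls.
(define (T n)
  (if (<= n 1)
      1
      (+ 1 (T (- n 1)) (T (- n 2)))))

;; Tabulate T(n) next to the bound 2^n.
(for ([n (in-range 0 11)])
  (printf "T(~a) = ~a <= ~a\n" n (T n) (expt 2 n)))
```

Note that T has the same shape as fib itself, so evaluating it for large n is just as slow as the original; it is only meant as a sanity check for small inputs.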

See page 8 of the following document for an analysis of T:

http://users.cecs.anu.edu.au/~peter/seminars/RunningTimeInduction.

It is proven by induction that T(n) <= 2^n: if T(n-1) <= 2^(n-1) and T(n-2) <= 2^(n-2), then T(n) = 1 + T(n-1) + T(n-2) <= 1 + 2^(n-1) + 2^(n-2) <= 2^n, since 1 <= 2^(n-2) for n > 1.

soegaard

The second function computes the same thing multiple times, so its complexity is exponential (O(2^n), you can get a better bound, but it's in that ballpark).

         f(5)
       /     \
     f(3)    f(4)
   /  |     /    \
f(1) f(2)  f(2)   f(3)
          /    \    |   \
        f(0)  f(1) f(1)  f(2)
                         /   \
                       f(0)  f(1)

As you can see by drawing the recursion tree, even for small values multiple values are recomputed very often.

To see that it is really exponential, imagine the tree for f(6): it will contain this entire tree (because f(6) calls f(5)) plus the tree for f(4), so it will be almost double the size.

The solution that uses accumulators avoids this by building the required Fibonacci number bottom-up, thus only computing what is absolutely necessary. This makes it O(n) to get the n-th Fibonacci number.
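To make the gap concrete, here is a small sketch that counts how many calls each approach makes. (The helper names naive-calls and acc-calls are invented for this illustration, and only calls are counted, not the cost of bignum arithmetic.)

```racket
#lang racket
;; Number of calls the tree-recursive fib makes for input n:
;; exactly the recurrence C(n) = 1 + C(n-1) + C(n-2), C(0) = C(1) = 1.
(define (naive-calls n)
  (if (<= n 1)
      1
      (+ 1 (naive-calls (- n 1)) (naive-calls (- n 2)))))

;; The accumulator version makes one call per step: n+1 calls in total.
(define (acc-calls n)
  (let loop ([x 0] [count 1])
    (if (= x n)
        count
        (loop (add1 x) (add1 count)))))

(printf "n=20: naive makes ~a calls, accumulator makes ~a\n"
        (naive-calls 20) (acc-calls 20))
```

For n = 20 the naive version already makes 21891 calls, while the accumulator version makes 21.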

To get an idea of how much more efficient the accumulator version is, look at how a linear function grows compared to an exponential one:

n   |   2^n
===========
1   |   2
2   |   4
3   |   8
4   |  16
5   |  32
6   |  64

So basically, linear is a lot faster indeed, as you've already noticed in practice.

IVlad
  • Is `O(2^n)` not doubly exponential, since `n` is exponential in the input size? – Lutz Lehmann Apr 15 '15 at 15:33
  • And it should be `O(n·2^n)`: the function values are greater than `1.6^(n-1)-1`, so the bit size of the output alone is `O(n)`, i.e., exponential in the input size `log2(n)`. – Lutz Lehmann Apr 15 '15 at 15:45
  • @LutzL - I considered addition and number of bits to be constant. Of course, if you want to be thorough, you should take those into account as well, but the OP asked a basic question and I think considering that would only cause confusion. – IVlad Apr 15 '15 at 15:50
  • Then you should not use the asymptotic O(n) notation. Or specify that you compute modular Fibonacci numbers. Remember that `F(n)` is in its value exponential in `n`. `F(n)` requires about `0.7*n` bits for its representation, with `uint64` you can express up to `F(91)`. For large `n` you will thus need BigInt data types. – Lutz Lehmann Apr 15 '15 at 15:56
  • @LutzL - plenty of sources ignore those things for simplicity - this isn't a formal college course on the subject. They're not essential to understanding why an algorithm is faster than another and by how much. You can consider the complexities I mentioned as dealing with the number of recursive calls each algorithm makes, then they're correct without any assumptions, and still get the point across. – IVlad Apr 15 '15 at 16:09
  • "Plenty of sources" also do not include BigInteger or general variable digit length algorithms. In this sense, Fibonacci is a very bad example, exponential already when only comparing the values or encoding of input and output. -- Yes to the calls count; and then one most clever student comes with an iterative implementation without any function calls and calls it O(1). – Lutz Lehmann Apr 15 '15 at 16:24

Well, it is easy: to compute a new number, you just add the two smaller numbers you already know.

Every new number is computed in constant time.

Therefore the complexity of computing the n-th Fibonacci number this way is O(n) - linear.
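That bottom-up idea can be sketched with a named let (essentially the accumulator version from the question; fib-iter is just an illustrative name):

```racket
#lang racket
;; Bottom-up Fibonacci: each step performs one addition of the two
;; numbers already known, so n steps suffice for fib(n).
(define (fib-iter n)
  (let loop ([i 0] [a 0] [b 1])   ; invariant: a = fib(i), b = fib(i+1)
    (if (= i n)
        a
        (loop (add1 i) b (+ a b)))))

(map fib-iter '(0 1 2 3 4 5 6 7)) ; → '(0 1 1 2 3 5 8 13)
```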

libik
  • Ah, ok. Wow, that's really neat! Thanks. – Sean_J Apr 15 '15 at 14:56
  • Since the computation of the `n`th number requires `n` additions of numbers of digit size `O(n)`, this algorithm has the complexity `O(n²)`. Remember that the asymptotic estimates must hold for arbitrarily large `n`, so numbers that fit in ALU registers are not, by far, the end. And since that is `O(2^(2*log2(n)))` and `log2(n)` is the input size, should this algorithm not qualify also as exponential (single, the naive one is doubly so)? – Lutz Lehmann Apr 15 '15 at 15:37
  • @LutzL - you are only half right; addition is considered in most cases a constant-time operation. – libik Apr 15 '15 at 15:43
  • So the addition of 10 digit numbers counts the same as the addition of 10000 digit numbers? It would be understandable if the algorithm includes a like number of multiplications, since their cost dominates. However, in the absence thereof ... – Lutz Lehmann Apr 15 '15 at 15:48
  • @LutzL - it always depends on the "computing model" you are considering. The most common one takes basic operations on numbers as constant. – libik Apr 15 '15 at 15:53
  • We are talking here about a "computing model" that includes multi-digit or BigInteger data types. Which I would say do not fall under "most common". I might be wrong, but that most common model would be for numerical computations in floating point where indeed the arithmetic operations have a constant cost. – Lutz Lehmann Apr 15 '15 at 15:59