
I have an algorithm I'm trying to implement. I've been asked to determine a function that describes its worst-case running time. As input, it takes an array of some length (let's call it n). It then does the following:

if (n == 0) { return 0; }
else if (n == 1) { return A[0]; }
else {
    return f(n-1) + f(n-2);
}

Sorry if I'm a tad sparse on the implementation details, but in a sense it's rather similar to the Fibonacci sequence. I'm thinking the worst-case running time of this algorithm is T(n) = 2^n, because if n is large, it decomposes into 2 separate calculations, which in turn split into 2 more, and so on. I'm just not sure how to formally justify this.

user979616
  • See http://stackoverflow.com/questions/4326634/complexity-for-recursive-functions-time-and-space – jonvuri Sep 29 '12 at 18:52
  • Thank you. However, I'm sorta looking for a bit of an explanation of how exactly you determine these things. I've looked around, but people just seem to start throwing notation and answers around without a good explanation. I'm kinda new to this whole thing – user979616 Sep 29 '12 at 18:56
  • 1
    There's no quick answer. You need to understand recurrence relations in the context of algorithmic complexity in order to prove your result. The accepted answer on the question I linked links to a page that explains it: http://www.cs.duke.edu/~ola/ap/recurrence.html (See heading "The Recurrence Relation" and beyond) – jonvuri Sep 29 '12 at 19:01
  • With what you show from the code, there is no worst case, you only use A[0] whatever the value of n ... show us more – Kwariz Sep 29 '12 at 19:36

1 Answer


Let's first set up a recurrence for the running time.

T(0) = T(1) = 1

since both just return a number (one is an array-lookup, but that's constant time too). And for n > 1 we have

T(n) = T(n-1) + T(n-2) + 1

since you evaluate f(n-1) and f(n-2) and add the two results. That's almost the same recurrence as the Fibonacci sequence itself, F(n) = F(n-1) + F(n-2), and the result is closely related.
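Counting the calls directly reproduces this recurrence. A minimal Python sketch (assuming each call does one unit of work besides the recursion):

```python
# T(n) under unit cost per call: T(0) = T(1) = 1, T(n) = T(n-1) + T(n-2) + 1
def t(n):
    if n <= 1:
        return 1                      # base cases return immediately
    return t(n - 1) + t(n - 2) + 1    # two recursive calls plus one addition

print([t(n) for n in range(6)])       # [1, 1, 3, 5, 9, 15]
```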

 n | T(n) | F(n)
----------------
 0 |   1  |   0
 1 |   1  |   1
 2 |   3  |   1
 3 |   5  |   2
 4 |   9  |   3
 5 |  15  |   5
 6 |  25  |   8
 7 |  41  |  13
 8 |  67  |  21
 9 | 109  |  34
10 | 177  |  55
11 | 287  |  89

If you look at the values, you see that

T(n) = F(n+2) + F(n-1) - 1

and you can prove that by induction, if you need to.
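Before doing the induction, the identity can be checked numerically. A Python sketch (`fib` is the standard iterative Fibonacci; `t` is the recurrence above):

```python
def fib(n):
    # iterative Fibonacci: fib(0) = 0, fib(1) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def t(n):
    # T(0) = T(1) = 1, T(n) = T(n-1) + T(n-2) + 1
    return 1 if n <= 1 else t(n - 1) + t(n - 2) + 1

# T(n) = F(n+2) + F(n-1) - 1 for n >= 1 (n = 0 would need F(-1) = 1)
assert all(t(n) == fib(n + 2) + fib(n - 1) - 1 for n in range(1, 20))
```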

Since the terms of the Fibonacci sequence are given by F(n) = (φ^n - (1-φ)^n)/√5, where φ = (1 + √5)/2, you see that the complexity of your f is also Θ(φ^n), the same as that of the naive Fibonacci computation. That's better than Θ(2^n), but still exponential, so computing it this way is only feasible for small n.
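As a numerical sanity check (a Python sketch; `t` reimplements the call-count recurrence), the ratio T(n+1)/T(n) settles near φ ≈ 1.618 rather than 2:

```python
def t(n):
    # call-count recurrence: T(0) = T(1) = 1, T(n) = T(n-1) + T(n-2) + 1
    return 1 if n <= 1 else t(n - 1) + t(n - 2) + 1

phi = (1 + 5 ** 0.5) / 2              # golden ratio, ~1.618
ratios = [t(n + 1) / t(n) for n in range(18, 23)]
assert all(abs(r - phi) < 1e-3 for r in ratios)  # growth rate is ~phi, not 2
```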

Will Ness
Daniel Fischer
  • if `A[0] == 1` isn't the OP definition *exactly* the definition of Fibonacci sequence? Fibonacci calculation too adds the two sub-cases. So why there should be _any_ difference? – Will Ness Sep 29 '12 at 19:59
  • The exact code in the last return is slightly different, but is of the same level of complexity. Its just a comparison with some other array element. – user979616 Sep 29 '12 at 20:02
  • @WillNess Whatever `A[0]` is, the sequence will be `A[0]*F(n)`, so the complexity of computing it the naive way is exactly that of computing `F(n)` the naive way. The cost of computing `F(n)` the naive way is usually called `nfib(n)`, which can be expressed like above by Fibonacci numbers. – Daniel Fischer Sep 29 '12 at 20:07
  • Thanks, Daniel; I've edited to clarify the sentence, with apologies for intrusion. :) – Will Ness Sep 29 '12 at 20:10
  • @WillNess Yeah, I guess that's clearer for people not being able to read my mind, thanks. – Daniel Fischer Sep 29 '12 at 20:14