Foo(A, f, l)
    // Precondition: A[f..l] is an array of integers; f and l are naturals ≥ 1 with f ≤ l.
    if f = l then
        return A[f]
    else
        m ← floor((f + l) / 2)
        return min(Foo(A, f, m), Foo(A, m + 1, l))
    end if

Correct me if I'm wrong, but I think this code returns the smallest integer in the array. How do I figure out the recurrence relation that describes the time complexity in terms of the size of the array A? Could you please guide me to the solution so I can understand? I don't even know where to begin.
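To check my reading, here is a direct Python transcription (a sketch of my own, not part of the original pseudocode; the leading `None` just pads index 0 so the 1-based bounds carry over):

```python
def foo(A, f, l):
    """Return the minimum of A[f..l] (1-based, inclusive bounds)."""
    if f == l:
        return A[f]
    m = (f + l) // 2  # floor of (f + l) / 2
    return min(foo(A, f, m), foo(A, m + 1, l))

A = [None, 7, 3, 9, 1, 4]  # pad index 0; the real elements are A[1..5]
print(foo(A, 1, 5))        # prints 1, the smallest element
```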

jbsu32
Bucciarati
  • Take some example values for A, f, l and draw what calls are made, i.e., the full recursion tree. You'll see that in each pass the range is halved, but both halves are recursed into. – xrisk Sep 24 '16 at 20:29
  • With that precondition `f <= l`, you can't ensure it is maintained in your internal calls. Also, it seems to be a useless precondition; maybe you should delete it. Nor can you ensure that `m >= 1`. – Santiago Gil Sep 24 '16 at 20:51
  • Look at this [question](http://stackoverflow.com/questions/3255/big-o-how-do-you-calculate-approximate-it) – DarthRubik Sep 24 '16 at 21:23
  • Let `n` be the size of the array so `n = l - f + 1`. `T(1) = 1`, so what's `T(n)`? – fgb Sep 24 '16 at 21:26

2 Answers


We can recover the recurrence relation from the structure of the pseudocode. Let T(n) represent the time taken by the algorithm as a function of the input size n = l - f + 1. For n = 1, the time is constant, say T(1) = a. The question now is: how can we express T(n) for larger n?

For n > 1 we take the else branch. We do some constant extra work - call it b - and then call the function twice, once on an input of size floor(n/2) and once on an input of size ceiling(n/2). So for n > 1 the recurrence reads T(n) = b + T(floor(n/2)) + T(ceiling(n/2)). We can now write out some terms.

n    T(n)
1    a
2    b + a + a                  = b + 2a
3    b + (b + 2a) + a           = 2b + 3a
4    b + (b + 2a) + (b + 2a)    = 3b + 4a
5    b + (b + 2a) + (2b + 3a)   = 4b + 5a
...  ...
k    (k - 1)b + ka              = k(a + b) - b

This suggests the guess T(n) = (a + b)n - b, where a and b are constants covering the amounts of work we treat as constant-time (strictly speaking, computing (f + l) / 2 is not constant in terms of n, but this does not change the analysis). We can prove the guess using mathematical induction:

  1. Base case: T(1) = a = (a + b)(1) - b, as required.
  2. Inductive hypothesis: assume that T(n) = (a + b)n - b for all n <= k.
  3. Inductive step: does T(k + 1) = (a + b)(k + 1) - b hold? Recall that T(k + 1) = b + T(floor((k + 1)/2)) + T(ceiling((k + 1)/2)). Suppose k + 1 is even and m = (k + 1)/2. Then T(k + 1) = b + 2T(m) = b + 2[(a + b)m - b] = b + 2m(a + b) - 2b = (2m)(a + b) - b = (k + 1)(a + b) - b, as required. The case where `k + 1` is odd is left as an exercise.

This is linear: T(n) = Θ(n).
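If you'd rather sanity-check the closed form numerically than trust the induction, a few lines of Python (an illustrative sketch of mine, with arbitrary values standing in for the constants a and b) evaluate the recurrence directly:

```python
import math

a, b = 3, 5  # arbitrary positive constants for T(1) and the per-call overhead

def T(n):
    """Evaluate the recurrence T(1) = a, T(n) = b + T(floor(n/2)) + T(ceil(n/2))."""
    if n == 1:
        return a
    return b + T(n // 2) + T(math.ceil(n / 2))

for n in (1, 2, 3, 7, 64, 1000):
    assert T(n) == (a + b) * n - b  # matches the closed form at every sample point
print("T(n) = (a + b)n - b verified")
```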

Patrick87

You're right: it returns the smallest integer in the array.

And the complexity is

O(n); n = size of the array

Explanation: In each call you break the range into two halves and recurse on both, until f = l. The recursion tree therefore has n leaves (one per element) and n - 1 internal nodes, so the function is called 2n - 1 times in total, each call doing O(1) work. The depth of the recursion is O(log(n)), but that measures how deeply the calls nest, not how many there are. So the total complexity is O(n).
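A quick way to see the 2n - 1 count is to count invocations directly (a small sketch of my own, mirroring the pseudocode's splitting):

```python
def count_calls(f, l):
    """Number of Foo invocations made on the range [f, l]."""
    if f == l:
        return 1  # a leaf call: no further recursion
    m = (f + l) // 2
    return 1 + count_calls(f, m) + count_calls(m + 1, l)

for n in (1, 2, 5, 16, 100, 1000):
    assert count_calls(1, n) == 2 * n - 1
print("Foo makes 2n - 1 calls on an n-element range")
```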

jbsu32