We can recover the recurrence relation from the structure of the pseudocode. Let T(n) represent the time taken by the algorithm as a function of the input size. For n = 1, the time is constant, say T(1) = a. The question now is: for larger n, how can we express T(n)?
For n > 1 we will be in the else clause. We do some extra work -- call it b -- and then call the function twice, once on an input of size floor(n/2) and once on an input of size ceiling(n/2). So we can write this part of the recurrence as T(n) = b + T(floor(n/2)) + T(ceiling(n/2)). We can now write out some terms.
n     T(n)
1     a
2     b + a + a = b + 2a
3     b + b + 2a + a = 2b + 3a
4     b + b + 2a + b + 2a = 3b + 4a
5     b + b + 2a + 2b + 3a = 4b + 5a
...   ...
k     (k-1)b + ka = kb - b + ka = k(a + b) - b
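The table above can be reproduced directly from the recurrence. A minimal Python sketch, using illustrative constants a = b = 1 (the names T, a, and b follow the analysis; the function itself is an assumption, since the original pseudocode is not shown here):

```python
def T(n, a=1, b=1):
    """Time recurrence: T(1) = a; T(n) = b + T(floor(n/2)) + T(ceil(n/2)) for n > 1."""
    if n == 1:
        return a
    # n // 2 is floor(n/2); (n + 1) // 2 is ceil(n/2) for positive integers
    return b + T(n // 2, a, b) + T((n + 1) // 2, a, b)

# With a = b = 1 the closed form k(a + b) - b becomes 2k - 1:
print([T(n) for n in range(1, 6)])  # [1, 3, 5, 7, 9]
```

Each printed value matches the corresponding row of the table with a = b = 1 substituted in.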
This suggests the guess T(n) = (a + b)n - b for some constants a and b that depend on amounts of work we take as constant (note that computing (f + l) / 2 is not really constant in terms of n, but this will not change our analysis). We can prove this using mathematical induction:
- Base case: T(1) = a = (a + b)(1) - b, as required.
- Assume that T(n) = (a + b)n - b for all n <= k.
- Does T(k + 1) = (a + b)(k + 1) - b hold? Recall that T(k + 1) = b + T(floor((k+1)/2)) + T(ceiling((k+1)/2)). Suppose k + 1 is even and let m = (k+1)/2. Then T(k+1) = b + 2T(m) = b + 2[(a + b)m - b] = b + 2m(a + b) - 2b = (2m)(a + b) - b = (k + 1)(a + b) - b, as required. The case where k + 1 is odd is left as an exercise.
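The closed form can also be checked numerically for both even and odd inputs. A sketch assuming arbitrary illustrative constants (here a = 2, b = 3):

```python
def T(n, a, b):
    """Recurrence from the analysis: T(1) = a; T(n) = b + T(floor(n/2)) + T(ceil(n/2))."""
    if n == 1:
        return a
    return b + T(n // 2, a, b) + T((n + 1) // 2, a, b)

a, b = 2, 3
for n in range(1, 50):
    # Closed form proved by induction: T(n) = (a + b)n - b
    assert T(n, a, b) == (a + b) * n - b
print("closed form verified for n = 1..49")
```

This does not replace the induction proof, but it covers the odd case left as an exercise for all the values tested.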
So T(n) is linear in n.