Ok, mathematically (sort of ;) I get something like this:
T(n) = 2T(n/2) + c
T(1) = 1
Unrolling the recurrence k times:
T(n) = 2^k * T(n/2^k) + (2^k - 1) * c
T(1) = 1
The base case is reached when n/2^k == 1, i.e. when k == log2(n) (assuming n is a power of two), so:
T(n) = 2^(log2 n) * T(1) + (2^(log2 n) - 1) * c
Since T(1) = 1 and 2^(log2 n) = n (a basic log property):
T(n) = n * 1 + (n - 1) * c
T(n) = n + n*c - c
T(n) = n * (1 + c) - c
T(n) = O(n)
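As a quick sanity check (my own addition, not part of the original question), you can evaluate the recurrence directly and compare it against the closed form n + (n - 1)*c for powers of two:

```python
# Sanity check: evaluate T(n) = 2*T(n/2) + c with T(1) = 1 directly,
# and compare against the closed form n + (n - 1)*c.
def T(n, c):
    if n == 1:
        return 1
    return 2 * T(n // 2, c) + c

c = 5  # arbitrary constant cost per call
for n in [1, 2, 4, 8, 16, 1024]:  # powers of two, so n/2^k hits 1 exactly
    assert T(n, c) == n + (n - 1) * c
    print(n, T(n, c))
```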
A clue that this is not O(n*log n) is that there is no real work to combine the two subproblems. Unlike mergesort, where you have to merge the two sorted subarrays in linear time, this algorithm does only constant work with the recursive results, which is why that cost shows up as the constant c in the recurrence.
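To make the contrast concrete (this is my own sketch, not necessarily your algorithm), here's a divide-and-conquer maximum, whose recurrence is exactly T(n) = 2T(n/2) + c: the "combine" is a single comparison.

```python
# Divide-and-conquer max: T(n) = 2T(n/2) + c.
# Combining the two halves is one comparison -- the constant c --
# unlike mergesort's O(n) merge step.
def dc_max(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo == 1:                # base case: T(1)
        return a[lo]
    mid = (lo + hi) // 2
    left = dc_max(a, lo, mid)       # T(n/2)
    right = dc_max(a, mid, hi)      # T(n/2)
    return max(left, right)         # constant-cost combine: c

print(dc_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```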
UPDATE: Intuition behind the result
This algorithm should be O(n) because you visit each element in the array only once. It may not seem trivial because recursion never is.
For example, you divide the problem into two subproblems of half the size; each of those is divided in half again, and so on until every subproblem has size 1. At that point you have n subproblems of size 1, which is n * O(1) = O(n).
The path from the original problem down to the n subproblems of size 1 has logarithmic depth, because each step halves the size. But at each of those intermediate steps you only do constant work with the results, so all the splitting together contributes another O(n), not an extra log n factor.
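You can check both counts by instrumenting the recursion (again just an illustrative sketch): splitting n (a power of two) produces 2n - 1 calls in total (n leaves plus n - 1 internal calls), with a maximum depth of log2(n).

```python
# Count total calls and recursion depth for the halving recursion.
def explore(n, depth=0, stats=None):
    if stats is None:
        stats = {"calls": 0, "max_depth": 0}
    stats["calls"] += 1
    stats["max_depth"] = max(stats["max_depth"], depth)
    if n > 1:
        explore(n // 2, depth + 1, stats)
        explore(n - n // 2, depth + 1, stats)
    return stats

print(explore(16))  # {'calls': 31, 'max_depth': 4} -> 2*16 - 1 calls, log2(16) deep
```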
Hope this helps