13

I want to find the time complexity of this program using recurrence equations. Here is the code:

int g(int x);   /* forward declaration: f and g call each other */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

I wrote its recurrence equation and tried to solve it, but it keeps getting more complex:

T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + 2c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + 3c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + 4c
     ...
     (after k steps)
     = T(n-k) + g(n-k+1) + ... + g(n-1) + g(n) + kc

Suppose the argument reaches 1 at the kth step, i.e. n - k = 1, so k = n - 1.

Now I end up with this:

T(n) = T(1) + g(2) + g(3) + ... + g(n-1) + g(n) + (n-1)c

I'm not able to solve it further. Anyway, if we count the number of function calls in this program, it is easy to see that the time complexity is exponential, but I want to prove it using the recurrence. How can it be done?
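Counting calls is at least easy to check empirically. Below is a minimal instrumented sketch of my own (the global counter, the forward declaration, and the main driver are additions, not part of the original program); the ratio between successive call counts tends to 2:

#include <stdio.h>

static long long calls;   /* total number of calls to f and g */

int g(int x);             /* forward declaration: f and g are mutually recursive */

int f(int x)
{
    calls++;
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    calls++;
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

int main(void)
{
    long long prev = 0;
    for (int n = 1; n <= 25; n++) {
        calls = 0;
        f(n);
        if (prev > 0)
            printf("n=%2d  calls=%10lld  ratio=%.3f\n", n, calls, (double)calls / prev);
        else
            printf("n=%2d  calls=%10lld\n", n, calls);
        prev = calls;
    }
    return 0;
}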


The explanation in Answer 1 looks correct; it is similar to the work I did.

The most difficult task with this code is writing its recurrence equation. I have drawn another diagram and identified some patterns; I think this diagram may help in finding the right recurrence.

For f(2): (recursion-tree diagram, image not shown)

For f(3): (recursion-tree diagram, image not shown)
I came up with this equation, but I am not sure whether it is right. Please help:

T(n) = 2*T(n-1) + c*log n
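For reference (this restates nhahtdh's derivation from the comments below, as a cross-check on my guess), the recurrences read off directly from the code are:

\[
T_f(n) = T_f(n-1) + T_g(n) + c, \qquad T_g(n) = T_f(n-1) + T_g(n/2) + c
\]

Substituting the second equation into the first gives

\[
T_f(n) = 2\,T_f(n-1) + T_g(n/2) + 2c
\]

So the term added to 2T_f(n-1) is T_g(n/2), not c*log n: the log n in the diagrams counts only the halving chain of a single g call, and every step of that chain spawns a further call to f.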
siddstuff
  • What is the exact question? Do you want to prove that T_f(x) = Theta(c^x) for some c > 1? Or do you want an exact formula? Same for g? – Knoothe Mar 24 '13 at 07:04
  • this code is very confusing , we need to consider both function f(x) and g(x)... – siddstuff Mar 24 '13 at 07:22
  • You need to solve `g(x) = 2g(x - 1) - g((x - 1) / 2) + g(x / 2)`, then plug it back in `f(x)` to solve for f(x). – nhahtdh Mar 24 '13 at 07:57
  • @nhahtdh where did you get that equation from? – Ed Rowlett-Barbu Mar 24 '13 at 07:59
  • @Zadirion: Some derivation from the relation `f(x) = f(x - 1) + g(x)` and `g(x) = f(x - 1) + g(x/2)`. `g(x) = f(x - 1) + g(x/2) = f(x - 2) + g(x - 1) + g(x/2) = g(x - 1) - g((x - 1)/2) + g(x - 1) + g(x/2)`. (I have no idea how to solve this, though). – nhahtdh Mar 24 '13 at 08:02
  • @sidstuff: and the winner is....? Mr. Knoothe has given the tightest bound, and his answer deserves to be accepted, i.m.o, although I agree with Saeed that there is not much practical difference between 2^n and 3^n. And please do not tell us that your teacher (this was homework, wasn't it?) said that O(n) is the answer (although... then I win :-) BTW: I enjoyed the problem, and the discussion, gentlemen! – Hans Lub Mar 26 '13 at 09:17

4 Answers

3

Ok, I think I have been able to prove that f(x) = Theta(2^x) (note that the time complexity is the same). This also proves that g(x) = Theta(2^x) as f(x) > g(x) > f(x-1).

First as everyone noted, it is easy to prove that f(x) = Omega(2^x).

Now we have the relation that f(x) <= 2 f(x-1) + f(x/2) (since f(x) > g(x))

We will show that, for sufficiently large x, there is some constant K > 0 such that

f(x) <= K*H(x), where H(x) = (2 + 1/x)^x

This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x-> infinity (wolfram alpha link of the limit).

Now (warning: heavier math; perhaps cs.stackexchange or math.stackexchange is better suited)

according to wolfram alpha (click the link and see series expansion near x = infinity),

H(x) = exp(x ln(2) + 1/2 + O(1/x))

And again, according to wolfram alpha (click the link (different from above) and see the series expansion for x = infinity), we have that

H(x) - 2H(x-1) = [1/(8x^2) + O(1/x^3)] exp(x ln(2) + 1/2 + O(1/x))

and so

[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity

Thus, for sufficiently large x (say x > L) we have the inequality

H(x) >= 2H(x-1) + H(x/2)

Now there is some K, dependent only on L (for instance K = f(2L)), such that

f(x) <= K*H(x) for all x <= 2L

Now we proceed by (strong) induction (you can revert to natural numbers if you want to)

f(x+1) <= 2f(x) + f((x+1)/2)

By induction, the right side is

<= 2*K*H(x) + K*H((x+1)/2)

And we proved earlier that

2*H(x) + H((x+1)/2) <= H(x+1)

Thus f(x+1) <= K * H(x+1)
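Not a substitute for the proof, but here is a quick numerical sanity check of the two key facts (a sketch of my own; H below is just (2 + 1/x)^x evaluated with doubles): H(x)/2^x settles near sqrt(e) ≈ 1.6487, and the ratio H(x)/(2H(x-1) + H(x/2)) climbs above 1 once x is large enough (somewhere between x = 20 and x = 40), which is exactly the inequality the induction needs.

#include <stdio.h>
#include <math.h>

/* H(x) = (2 + 1/x)^x, treated as a real-valued function */
static double H(double x)
{
    return pow(2.0 + 1.0 / x, x);
}

int main(void)
{
    for (double x = 10; x <= 640; x *= 2) {
        double lhs = H(x);
        double rhs = 2.0 * H(x - 1) + H(x / 2);
        printf("x=%4.0f  H(x)/2^x=%.5f  H(x)/(2H(x-1)+H(x/2))=%.6f\n",
               x, lhs / pow(2.0, x), lhs / rhs);
    }
    return 0;
}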

Knoothe
  • Is H really in Theta(2^x)? For any k it will eventually outgrow k*2^x – Hans Lub Mar 24 '13 at 22:40
  • @HansLub: Yes, it is Theta(2^x). The limit of H(x)/2^x as x->oo is e^(1/2). (basically limit of (1+1/2x)^x = sqrt ((1 + 1/2x)^(2x)) = sqrt(e)) – Knoothe Mar 24 '13 at 23:47
  • @HansLub: I have edited the answer to provide links to wolfram alpha, to help convince better :) (I had an algebraic error earlier, but the proof seems essentially correct). – Knoothe Mar 25 '13 at 04:13
  • Nice! +1. Why not stop at the line (forall x>L) H(x) >= 2H(x-1) + H(x/2)? Because you also have (forall x) f(x) <= 2 f(x-1) + f(x/2) you are already done; for all x>2L H will grow faster than f. – Hans Lub Mar 25 '13 at 08:51
  • It was better to mention that you used my idea, at least first part of your answer strongly uses my proof and idea and you just develop it to get result in the second part (I spent around half an hour to solve and write it, after that it wasn't interesting to think more on one problem, but with this good start point you simply can develop it). but second part of your proof is good, but I think except my claim, my answer was good enough to this question. – Saeed Amiri Mar 25 '13 at 08:55
  • @Saeed Come on! All our proofs use the same recurrence H(n) = 2H(n-1) + H(n/2) (I even counted the additions, as an ever cost-conscious Dutchman...) You and I fall just short of proving O(2^n). Knoothes clever idea is to use the series (2+1/n)^n which is clearly (after polishing my spectacles :-) in O(2^n) – Hans Lub Mar 25 '13 at 09:16
  • @HansLub, I think my proof was strong proof (your proof was just descriptive and not good in mathematical point of view), I clearly stated that f(n) < (2+ε)^n, by defining ε value relative to n, this was strong start point that knoothes used to set H(x) = (2 + 1/x)^x, and use it in his future result. I don't saying without this he couldn't solve it, but may be without this it wasn't interesting to start to think furthur, like me, when I got this result, I said is enough for OP to know is as close as possible to 2^n, so I think this should mention that this is based on my idea and proof. – Saeed Amiri Mar 25 '13 at 09:27
  • (2+ε)^n uses a _fixed_ epsilon which will not get you O(2^n). (2+1/n)^n uses a _decreasing_ epsilon, and gets the result. – Hans Lub Mar 25 '13 at 09:51
  • @SaeedAmiri: I read your answer only after I solved this. Please don't accuse people of plagiarism like that, especially when you really didn't have anything substantial in the first place (in fact, you were claiming something incorrect, so your "strong" proof, is in fact incorrect). If you will notice, I was one of the first people to have commented on the question (and a now deleted answer, and that comment already talks about the recurrence, which everyone uses). – Knoothe Mar 25 '13 at 15:46
  • @HansLub: Yes, I could have stopped there, that is correct, but I suppose I wanted to make it clear :) – Knoothe Mar 25 '13 at 15:52
  • I cannot see your deleted answer (I can see 3 other deleted answers here but not yours), Anyway, if you didn't use my Idea I'm really sorry for that, because it was too close to my format, I thought you use it. – Saeed Amiri Mar 25 '13 at 15:59
  • btw, @SaeedAmiri, proving `f(n) = O((2+eps)^n)` for every fixed eps is not good enough, as (HansLub already realized), `f(n) = 2^n log n` satisfies that condition! – Knoothe Mar 25 '13 at 15:59
  • @SaeedAmiri: Not my answer, my comment on somebody else's answer. – Knoothe Mar 25 '13 at 16:00
  • @SaeedAmiri: btw, the core of this proof is that `H(x) >= 2H(x-1) + H(x/2)` (and that is the most difficult part). I just saw an edit to your answer, and that is still handwaving. You just cannot replace epsilon with 1/n like that! You are skipping over the hardest part of the proof. If you don't agree, would you consider posting on math.stackexchange to get expert comments? – Knoothe Mar 25 '13 at 16:02
  • First: The main part of my proof was on f(n) < 2f(n-1) +f(n/2), and because of this I was wondering to see similar proof, second I don't know what is hand waving, the relation ε > 1/(2+ε)^(n/2-1), holds for every n0>some constant (say 5) with ε=1/n, I think you didn't understand what I said or you like Hans mixed the constant n0 with original n, finally Yes I see your comment on other answer. – Saeed Amiri Mar 25 '13 at 16:08
  • @SaeedAmiri: Sorry, but f(x) <= 2f(x-1) + f(x/2) is the _easy_ observation to make. btw, here some mistakes in your proof: Your very first relation f(n) = 2f(n-1) + f(n/2-1) + f(n/4 -1) + .... is wrong. You are missing the factor of 2 for each. Second your statement about epsilon which you quote in the above comment is only useful for proving `f(n) < (2+epsilon)^n` if epsilon is constant. If it is a function of n, then you need something stronger (because you will effectively need to deal with 3 versions, (2+ 1/n)^n, (2+1/(n-1))^(n-1) and (2+2/n)^(n/2), and that is the difficult part. – Knoothe Mar 25 '13 at 16:24
  • You are essentially handwaving there. – Knoothe Mar 25 '13 at 16:26
  • Agree (n) = 2f(n-1) + f(n/2-1) + f(n/4 -1) +... is wrong but anyway, one inequality for upper bound was essential, and this is clear in my idea and proof (even wrong one), and finding this good upperbound was hard part not using wolfarmalpha (anyway I didn't talk about hard part, but if you think what you done is hard, I think finding good upper bound was point of this question). P.S: And now I fixed that hard and big mistake! – Saeed Amiri Mar 25 '13 at 16:29
  • @SaeedAmiri: The first mistake is irrelevant, as you seem to only use it for `f(n) <= 2f(n-1) + f(n/2)`, which is easy to see (see my answer for instance). It is obvious that the whole point was finding an upper bound (as a lower bound is trivial). It is _easy_ to prove an upper bound of c^x, for c > 2 (i.e. essentially (2+epsilon)^n). The hard part is coming up with an upper bound which is itself O(2^x). This is completely lacking from your answer, and your current approach does not work (that is what HansLub meant when he said the answers fall short). – Knoothe Mar 25 '13 at 16:43
  • @SaeedAmiri: btw, I gave those wolfram alpha links to make it easier to verify the answers. I have about 5-6 sheets of paper full of scribblings. And I will repeat again: It is easy to come up with a upper bound of `c^x` for c > 2 (that is what your answers does currently). The hard part is coming up with an upper bound, which is also O(2^x). – Knoothe Mar 25 '13 at 16:45
1

Using memoisation, both functions can easily be computed in O(n) time. But the program as written takes Ω(2^n) time, and is thus a very inefficient way of computing f(n) and g(n).
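For illustration, a minimal memoised sketch (my own, with an assumed bound MAXN on the argument and no bounds checking, for brevity): each distinct argument is computed once, so evaluating f(n) costs O(n) time. Note that the values themselves grow like 2^n, so they overflow 64-bit integers long before the table fills.

#define MAXN 1000                     /* assumed upper bound on the argument */

static long long fmemo[MAXN + 1];     /* 0 marks "not yet computed" */
static long long gmemo[MAXN + 1];

long long g_memo(int x);

long long f_memo(int x)
{
    if (x < 1) return 1;
    if (fmemo[x] == 0)
        fmemo[x] = f_memo(x - 1) + g_memo(x);
    return fmemo[x];
}

long long g_memo(int x)
{
    if (x < 2) return 1;
    if (gmemo[x] == 0)
        gmemo[x] = f_memo(x - 1) + g_memo(x / 2);
    return gmemo[x];
}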

To prove that the program takes O((2+epsilon)^n) time for any epsilon > 0:

Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):

F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1

Then one can prove:

  • F and G are monotonic
  • F > G
  • Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
  • by strong induction, F(n) <= 2 * H(n) (note that H(1) = 2 < F(1) = 3, so a constant factor is needed), hence it suffices to bound H
  • for all n, H(n) > 2 * H(n-1)
  • hence H(n/2) / H(n-1) -> 0 as n -> infinity
  • hence H(n) < (2 + epsilon) * H(n-1) for every epsilon > 0 and all sufficiently large n
  • hence H is in O((2 + epsilon)^n) for every epsilon > 0
  • (Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below)
  • so this is the best I can prove.... Because G < F <= 2H, they too are in O((2 + epsilon)^n) for every epsilon > 0 (a numerical tabulation of F, G and H follows this list)
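To make these claims concrete, here is a small tabulation of the recurrences above (a sketch of my own; doubles are used because the counts overflow 64-bit integers around n = 60): F stays within a constant factor of H, and H(n)/H(n-1) decreases towards 2 as the H(n/2) term becomes negligible.

#include <stdio.h>

int main(void)
{
    enum { MAX = 200 };
    double F[MAX + 1], G[MAX + 1], H[MAX + 1];

    /* F(0)=1, F(n)=F(n-1)+G(n)+1
       G(1)=1, G(n)=F(n-1)+G(n/2)+1   (n/2 is integer division)
       H(1)=2, H(n)=2H(n-1)+H(n/2)+1 */
    F[0] = 1; G[1] = 1; H[1] = 2;
    F[1] = F[0] + G[1] + 1;
    for (int n = 2; n <= MAX; n++) {
        G[n] = F[n - 1] + G[n / 2] + 1;
        F[n] = F[n - 1] + G[n] + 1;
        H[n] = 2 * H[n - 1] + H[n / 2] + 1;
    }
    for (int n = 10; n <= MAX; n += 38)
        printf("n=%3d  F/H=%.4f  H(n)/H(n-1)=%.6f\n",
               n, F[n] / H[n], H[n] / H[n - 1]);
    return 0;
}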

Postscript (after seeing Mr Knoothe's solution): because i.m.h.o. a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):

For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work of f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), using a fixed epsilon as above is not optimal. Replacing the fixed epsilon by a decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).

Hans Lub
  • 2^n is the lower bound, so you should say it is Omega(2^n). – nhahtdh Mar 24 '13 at 10:07
  • The steps of the proof look so hand-wavy that it sends chills down my spine... – nhahtdh Mar 24 '13 at 13:19
  • but that is not how you sink a proof. which step fails, and why? – Hans Lub Mar 24 '13 at 13:21
  • `hence H in O((2 + epsilon)^n) for any epsilon > 0 hence H in O(2^n)` This step looks a bit fishy. Since 2 + epsilon > 2, won't lim (2 + epsilon)^n / 2^n --> Inf when n --> Inf? Then H is not in O(2^n). – nhahtdh Mar 24 '13 at 13:24
  • Even better with a counterexample: (2^n) * log(n) is in O((2 + epsilon)^n) for any epsilon > 0 but not in O(2^n). So my proof is incorrect. – Hans Lub Mar 24 '13 at 15:08
  • "hence H(n/2) / H(n-1) -> 0 for sufficiently large n hence H(n) < (2 + epsilon) * H(n-1) for all epsilon > 0 and sufficiently large n", just saying something is obvious is not proving, this is wrong way in proof, like saying because you cannot find any graph with odd degree vertex which has eulerian tour, means every graph which has eulerian tour has even degree for each vertex, this is obvious but not proof. – Saeed Amiri Mar 25 '13 at 09:31
  • "-> 0" means "< ε for any sufficiently large n". The rest is highschool algebra. Mathematical rigor is not the same as spelling out all the details! – Hans Lub Mar 25 '13 at 10:17
-1

Let f(0)=0 and g(0)=0

From the functions we have:

f(x) = f(x - 1) + g(x) 
g(x) = f(x - 1) + g(x/2)

Substituting g(x) into f(x), we get:

f(x) = f(x-1) + f(x-1) + g(x/2)

∴ f(x) = 2f(x-1) + g(x/2)

Expanding this we get,

f(x) = 2f(x-1)+f(x/2-1)+f(x/4-1)+ ... + f(1)

Let s(x) be a function defined as follows,

s(x) = 2s(x-1)

Now clearly f(x)=Ω(s(x)).

The complexity of s(x) is O(2^x).

Therefore f(x) = Ω(2^x).
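To spell out the omitted induction step (my addition, using the relation f(x) = 2f(x-1) + g(x/2) derived above): since g is nonnegative, dropping the g(x/2) term and unrolling gives

\[
f(x) = 2f(x-1) + g\!\left(\tfrac{x}{2}\right) \;\ge\; 2f(x-1) \;\ge\; 2^2 f(x-2) \;\ge\; \cdots \;\ge\; 2^{x-1} f(1),
\]

so as long as f(1) is a positive constant, f(x) = Ω(2^x).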

Deepu
  • The lower bound is quite clear. The more interesting aspect is the upper bound. – nhahtdh Mar 24 '13 at 08:41
  • -1: For not proving anything non-trivial, and the incorrect statements later. – Knoothe Mar 24 '13 at 22:18
  • @Deepu: Thanks for listening, but (and not to be rude), I really don't think this answer is worth the upvotes it got. All you have shown is that f(x) = Omega(2^x), which is quite easy to show. The hard part is proving that f(x) = O(2^x). (nhahtdh also commented on the same lines). So, the downvote stands. Sorry about that. btw, just s(x) = 2s(x-1) is enough, isn't it? Why add the c? – Knoothe Mar 25 '13 at 04:15
-1

I think it is clear that f(n) > 2^n, because f(n) > h(n), where h(n) = 2h(n-1) = 2^n.

Now I claim that for every n there is an ε such that f(n) < (2+ε)^n. To see this, let us do it by induction, but to make it more approachable I'll first use ε = 1 to show f(n) <= 3^n, and then I'll extend it.

We will use strong induction: suppose for every m < n, f(m) < 3^m; then we have:

f(n) = 2f(n-1) + f(n/2 -1) + f(n/4 -1) + ... + f(1-1)

but for this part:

A = f(n/2 -1) + f(n/4 -1)+ ... +f(1-1)

we have:

f(n/2) = 2f(n/2 -1) + f(n/4 -1) + ... + f(1-1) ==>

A <= f(n/2)   [1]

So we can rewrite f(n):

f(n) = 2f(n-1) + A <= 2f(n-1) + f(n/2),

Now let us get back to our claim:

f(n) < 2*3^(n-1) + 3^(n/2) ==>
f(n) < 2*3^(n-1) + 3^(n-1) ==>
f(n) < 3^n.  [2]

By [2], the proof of f(n) ∈ O(3^n) is complete.

But if you want to extend this to the form (2+ε)^n, just use [1] to replace the inequality; then we will have:

for ε > 1/(2+ε)^(n/2-1) → f(n) < (2+ε)^n.  [3]

Also by [3] you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact there is a constant ε such that for n > n0, f(n) ∈ O((2+ε)^n). [4]

Now we can use Wolfram Alpha like @Knoothe, by setting ε = 1/n; then we will have:

f(n) < (2+1/n)^n, which results in f(n) < e·2^n, and with our simple lower bound from the start we have f(n) ∈ Θ(2^n). [5]

P.S.: I didn't calculate the epsilon exactly, but you can do it simply with pen and paper. I think this epsilon is not exactly right, but it is easy to find, and if it turns out to be hard, tell me and I'll write it out.
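If anyone wants to probe [3] with ε = 1/n numerically, here is a sketch of my own (not part of the answer): the condition 1/n > 1/(2 + 1/n)^(n/2 - 1) is equivalent to (2 + 1/n)^(n/2 - 1) > n, which first holds around n = 8 and, since the left side grows exponentially while the right side is linear, for every larger n as well.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Condition [3] with eps = 1/n holds iff (2 + 1/n)^(n/2 - 1) > n. */
    for (int n = 4; n <= 16; n++) {
        double lhs = pow(2.0 + 1.0 / n, n / 2.0 - 1.0);
        printf("n=%2d  (2+1/n)^(n/2-1) = %8.2f  %s n\n",
               n, lhs, lhs > n ? ">" : "<=");
    }
    return 0;
}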

Saeed Amiri
  • Yes, your last conclusion doesn't only hold for _some_ epsilon, but for _every_ epsilon > 0 – Hans Lub Mar 24 '13 at 15:58
  • @HansLub, in `O` notation when we say for every n > n0, means n0 is constant, and if you look at the relation between epsilon and n0, epsilon is dependent to n0, so I cannot say for every epsilon, because n0 is fixed, so I should focus on n0 not an epsilon, instead of saying for any epsilon there exists n0, I should say for every fixed n0 there exists epsilon such that ...., and the main point is this, we should find epsilon for n0. – Saeed Amiri Mar 24 '13 at 16:17
  • When you say that f is in O(g) you do not need to mention n0; you already state that there are c and n0 such that f(n) < c*g(n) for n > n0. And not only, as you say, for any n0 there exits an epsilon, but also for any epsilon there exists an n0 such that .... Hence for any epsilon, f is in O((2+epsilon)^n) – Hans Lub Mar 24 '13 at 16:28
  • @HansLub, actually I'm not agree with for all epsilon there exists n0 such that ..., because when epsilon is infinitely small then n0 will be infinitely large then n0 is not fixed constant. – Saeed Amiri Mar 24 '13 at 16:41
  • @HansLub: I can now say confidently, that something is wrong with the answer here. `f(x) = w(2^x)` is not right. In fact, `f(x) = Theta(2^x)`! See my answer with a proof. (Of course, the bug might be in my proof, but I am not handwaving anywhere, and each step is easily verifiable) – Knoothe Mar 24 '13 at 22:19
  • @HansLub, Now look at my answer, and tell me really using wolfarmalpha for proving limit was clever part or other part of the proof? – Saeed Amiri Mar 25 '13 at 09:45