1) int p = 0;
2) for (int i = 1; i < n; i*=2) p++;
3) for (int j = 1; j < p; j*=2) stmt;

In my analysis, line #1 is O(1), line #2 is O(lg(n)), and line #3 is O(lg(p)). I believe the second and third lines are independent, so the asymptotic time complexity should be O(lg(n) + lg(p)). However, the lecturer said it is O(lglg(n)) because p = lg(n). At this point, I have three questions.

  1. How does the second line relate to the third line? Could you please explain it in detail with some examples? I don't understand where p = lg(n) comes from.
  2. Is O(lg(n) + lg(p)) wrong? Would you please explain my mistake if so?
  3. If the complexity in my second question is correct, I don't understand how O(lglg(n)) can be the answer, because I think O(lg(n) + lg(p)) > O(lglg(n)).

Please comment if my questions are unclear.

Paul Hankin
Jinchul
    You might get more attention to your question if you decide on a language (my guess is C) and tag it. Also, people can think easier if the code looks compilable. For that I would turn the numbers into comments. – RubberBee Jun 10 '20 at 12:08
  • The variable p is incremented inside the 2nd for loop; that is the relation between the 2nd and 3rd loops. – Natsu Jun 10 '20 at 12:42
  • `p` is not an input variable, so can't appear in the time complexity of the overall program. It's true that line 3 is O(lg p). – Paul Hankin Jun 10 '20 at 13:02
  • @PaulHankin, would you please give a more detailed reason why you think the time complexity should be O(lg p)? Natsu, Dark, and Jacob answered that the complexity will be O(loglog(n)). – Jinchul Jun 11 '20 at 07:45
  • Just considering line 3 alone, it's O(lg p). Since in the whole program context at this point p=floor(lg(n)), it's also O(lg lg n) – Paul Hankin Jun 11 '20 at 11:36

2 Answers


It can be shown that p will be O(log n) after line 2 is finished. Therefore, the overall time complexity is O(T(stmt) * log(p) + log(n)), where T(stmt) is the cost of stmt, and since p = O(log n), this reduces to O(T(stmt) * log(log(n)) + log(n)). Assuming stmt is O(1), the real runtime is O(log(n) + log(log(n))). This can be further reduced to O(log(n)), since log(n) > log(log(n)) for any non-trivial n.

Why is p O(log n)? Well, consider what p evaluates to after line 2 is complete when n is 2, 4, 8, 16: p ends up being 1, 2, 3, 4 respectively. Thus, to increase p by one, you need to double n, so p is the inverse of n = 2^p, which gives p = log2(n) (more precisely, p = ceil(log2(n)) for n > 1). Carrying this value of p into line 3 yields the O(log(log(n))) term, and the final construction of the runtime is detailed in the first paragraph of this post.


Following your question, I wrote this C program and did the complexity analysis step by step so you can follow along:

#include <stdio.h>

int main() {
    //-------- first line to analyse --------
    // O(1): a single assignment

    int p = 0;
    int i, j, n = 100;

    //-------- second line to analyse --------
    // O(log(n)): i doubles every iteration, so the loop runs about log2(n) times

    for (i = 1; i < n; i = i * 2)
        p++;                 // as in your code: p counts the iterations, so p ~ log2(n)

    //-------- third line to analyse --------
    // O(log(p)): j doubles every iteration, so the loop runs about log2(p) times;
    // since p = log2(n) after the first loop, this is O(log(log(n)))

    for (j = 1; j < p; j = j * 2)
        printf("%d\n", j);

    return 0;
}

As for the first line, it is a single assignment to one variable, so it takes constant time: int p = 0; is O(1), and we start the cost function at f(n) = O(1).

After that we have the first loop. Its counter doubles each iteration (growth with base 2), so the loop runs about log2(n) times and costs O(log(n)), since the iteration count depends on the input n.

So the worst-case time complexity so far is f(n) = O(1) + O(log(n)).

The third line works the same way as the second loop, so its time complexity is O(log(p)), with p playing the role of the input size. The 2nd loop (3rd line of code) is a separate, non-nested part of the source code, so its cost is added; if it were nested inside the first loop, its cost would instead be multiplied by the first loop's iteration count.

So the time complexity is now f(n) = O(1) + O(log(n)) + O(log(p)).

Now we have the full time complexity formula and need to pick the dominant (worst) term from it.

**O(LogLogn)**: the time complexity of a loop is considered O(log(log(n))) if the loop variable is increased / decreased exponentially by a constant power each iteration.

// Here c is a constant greater than 1   
for (int i = 2; i <=n; i = pow(i, c)) { 
   // some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) { 
   // some O(1) expressions
}

So by the **-marked rule we can see that, because p = log(n) after the first loop, the third loop's O(log(p)) becomes O(log(log(n))). This is the answer to your 3rd question: the third loop alone is O(log(log(n))), while the total f(n) = O(1) + O(log(n)) + O(log(log(n))) simplifies to O(log(n)), since log(n) dominates log(log(n)).

reference: time complexity analysis

Dark debo