
Consider this fragment of code:

int sum = 0;
for( int i = 1; i <= n*n; i = i*2 ){
   sum++ ;
}

How do I do a quick but proper analysis to get the order of growth of its worst-case running time?

How does changing the update statement to i = i * 3 instead of i = i * 2 change the worst-case running time?

And is the analysis affected by changing the comparison operator to < instead of <=?

asked by iartist93
  • I don't understand your question... Are you asking what the loop is doing, and how it would behave if you replace i*2 with i*3 and <= with <? By the way, we're not here to do your homework... – Kraal Sep 21 '14 at 18:12
  • possible duplicate of [Plain English explanation of Big O](http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o) – Whymarrh Sep 21 '14 at 18:16
  • I want to get the order of growth of the worst-case running time – iartist93 Sep 21 '14 at 18:18

3 Answers


Actually, with the initialization i = 0 (as in the originally posted code), this is an infinite loop:

i = 0
i = i * 2   // 0 * 2 = 0

So this loop will never end. Initialize with i = 1 instead; the loop then counts the powers of 2 up to n^2 (despite its name, sum holds that count, not a sum).
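For reference, a minimal sketch of the corrected loop (the example n = 10 and the printout are mine, not from the answer):

#include <stdio.h>

int main(void) {
    int n = 10;   /* example size; any positive n works */
    int sum = 0;
    /* With i starting at 1 instead of 0, i doubles each pass: 1, 2, 4, ...
       so sum ends up counting the powers of 2 that are <= n*n. */
    for (int i = 1; i <= n * n; i = i * 2) {
        sum++;
    }
    printf("n = %d, n*n = %d, count = %d\n", n, n * n, sum);
    return 0;
}

For n = 10 this prints count = 7, matching the powers 1, 2, 4, 8, 16, 32, 64 that are at most 100.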

– Pratyush Khare
The code as originally posted was:

int sum = 0;
for( int i = 0; i <= n*n; i = i*2 ){
   sum++ ;
}

As posted, this is an infinite loop that never stops, since i never changes (0 * 2 = 0). Because complexity is defined only for algorithms, which by definition must terminate in a finite amount of time, it is undefined for this snippet.

However, if you change the code to the following:

int sum = 0;
for( int i = 1; i <= n*n; i = i*2 ){
   sum++ ;
}

We can analyze the complexity as follows:

Let the loop run k - 1 times and terminate at the kth update of i. Since it is better to be redundant than unclear, here is what happens:

Init(1)       -> Test(1)     -> Loop(1)     [i = 1] ->
Update(2)     -> Test(2)     -> Loop(2)     [i = 2] ->
...
Update(k - 1) -> Test(k - 1) -> Loop(k - 1) [i = 2 ^ (k - 2)] ->
Update(k)     -> Test(k)     -> STOP        [test fails, as i becomes 2 ^ (k - 1)]

Where Update(p) means the pth update (i = i * 2).

Since the updates to i are such that in the pth iteration (equivalently, after the pth update) the value of i is 2 ^ (p - 1), we can say that at termination:

2 ^ (k - 1) > (n * n)

In words: we terminated at the kth update, so the value of i at that point, 2 ^ (k - 1), must exceed n * n; otherwise we would have entered a kth iteration. Taking log base 2 on both sides:

 k - 1 > 2 * log(n),  i.e.  k ~ 2 * log(n)

Which implies that k is O(log(n)).

Equivalently, the number of times the loop runs is O(log(n)).
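As a quick empirical check (my own sketch, not part of the original answer; the range of n values is arbitrary), the following counts the iterations for a few values of n and compares them with 2 * log2(n):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (int n = 2; n <= 1024; n *= 2) {
        int count = 0;
        /* The loop under analysis: i = 1, 2, 4, ..., doubling until i > n*n. */
        for (long long i = 1; i <= (long long)n * n; i *= 2) {
            count++;
        }
        printf("n = %4d  iterations = %2d  2*log2(n) = %5.2f\n",
               n, count, 2.0 * log2((double)n));
    }
    return 0;
}

For these n (exact powers of two), the count comes out as 2 * log2(n) + 1, in line with k ~ 2 * log(n) above.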

You can easily extend this idea to any limit (say n * n * n) and any multiplier (i * 3, i * 4, etc.). In particular, with i = i * 3 the loop runs about 2 * log3(n) times, which is still O(log(n)), since logarithms to different bases differ only by a constant factor (log3(n) = log2(n) / log2(3)).

The Big O complexity is also unaffected by using < instead of <=: the two loops differ by at most one iteration (exactly when some power of 2 equals n * n), and a constant number of iterations never changes the order of growth. Both variations are illustrated in the sketch below.
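A sketch illustrating both variations (the count_iters helper and the example n = 64 are mine, purely for illustration):

#include <stdio.h>

/* Counts iterations of: for (i = 1; i <= limit (or i < limit); i *= factor). */
static int count_iters(long long limit, int factor, int strict) {
    int count = 0;
    for (long long i = 1; strict ? (i < limit) : (i <= limit); i *= factor) {
        count++;
    }
    return count;
}

int main(void) {
    long long n = 64;
    long long nn = n * n;  /* 4096 = 2^12, deliberately an exact power of 2 */
    printf("i *= 2, <= : %d\n", count_iters(nn, 2, 0));  /* 13 iterations */
    printf("i *= 3, <= : %d\n", count_iters(nn, 3, 0));  /*  8 iterations */
    printf("i *= 2, <  : %d\n", count_iters(nn, 2, 1));  /* 12: one fewer, only
                                                            because nn is an
                                                            exact power of 2 */
    return 0;
}

Switching the multiplier from 2 to 3 shrinks the count by roughly the constant factor log2(3), and switching <= to < changes it by at most one; neither affects O(log(n)).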

– axiom

To analyze any loop, you have to look at two things: the condition that makes it exit, and the update applied to the variable used in that condition.

For your code, notice that the loop stops once i grows past n * n (n^2), and that i increases as i = i * 2. Because i doubles on every pass, the loop runs about log(n^2) = 2 * log(n) times. You can see this by picking an example value for n^2, such as 128, and iterating by hand, as in the trace below.
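A minimal version of that manual trace (my sketch, assuming the corrected i = 1 start):

#include <stdio.h>

int main(void) {
    int limit = 128;  /* the example value of n*n from the answer */
    int count = 0;
    for (int i = 1; i <= limit; i = i * 2) {
        count++;
        printf("iteration %d: i = %d\n", count, i);
    }
    /* Prints i = 1, 2, 4, ..., 128: 8 iterations, i.e. log2(128) + 1. */
    printf("total iterations = %d\n", count);
    return 0;
}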

– Haris