int sum = 0;
for( int i = 0; i <= n*n; i = i*2 ){
sum++ ;
}
As it stands, this is an infinite loop: i starts at 0, and 0 * 2 is still 0, so i never changes. Since complexity is defined only for algorithms, which by definition must terminate in a finite amount of time, it is undefined for this snippet.
However, if you change the code to the following:
int sum = 0;
for( int i = 1; i <= n*n; i = i*2 ){
sum++ ;
}
We can analyze the complexity as follows:
Let the loop run k - 1 times, and terminate at the kth update of i.
Since it's better to be redundant than to be unclear, here is what is happening:
Init(1) -> test(1) -> Loop(1) [i = 1]->
Update(2) -> test(2) -> Loop(2) [i = 2]->
...
Update(k - 1) -> test(k - 1) -> Loop(k - 1) [i = 2 ^ (k - 2)] ->
Update(k) -> test(k)->STOP [Test fails as i becomes 2 ^ (k - 1)]
where Update(k) means the kth update (i = i * 2).
Since the increments in i are such that in the pth loop (or equivalently, after the pth update) the value of i is 2 ^ (p - 1), we can say that at termination:
2 ^ (k - 1) > (n * n)
In other words: we terminated at the kth update because the value of i at that point, 2 ^ (k - 1), was greater than (n * n); otherwise we would have gone on to the kth loop. Taking log base 2 on both sides:
k ~ 2 * log(n)
which implies that k is O(log(n)). Equivalently, the number of times the loop runs is O(log(n)).
You can easily extend this idea to any limit (say n*n*n) and any increment (i*3, i*4, etc.). The Big O complexity is unaffected by using < instead of <=.