
Possible Duplicate:
Plain English explanation of Big O

I have been programming for 4 years now, but I never paid close attention to what time complexity is. I have an interview tomorrow and I know they will ask me questions about it. Can anyone help me understand time complexity in simple terms, with examples? By looking at the code, how can we decide whether its complexity is O(n) or O(log n), etc.?

mindtree

2 Answers

Here is a general suggestion:

If there is a single loop and the loop variable is incremented linearly, then it's O(n), e.g.

for(i=0;i<n;i++) //O(n)
for(i=0;i<n;i = i + 4) // still O(n)

If the loop variable is incremented geometrically (multiplied by a constant factor each time), then it's O(log n),

e.g.

for(i=1;i<n;i = i * 2) //O(log n)

Note that the implementation doesn't have to use a loop; the same complexity can arise from recursion, as in the sketch below.
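
For instance, a minimal recursive sketch (the function name is just illustrative) that halves n on every call makes about log2(n) calls, so it has the same O(log n) growth as the loop above:

// Counts how many times n can be halved before reaching 1.
// The recursion depth is about log2(n), so the running time is O(log n).
int halving_depth(int n)
{
    if (n <= 1)
        return 0;                        // base case: nothing left to halve
    return 1 + halving_depth(n / 2);     // one recursive call per halving step
}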

If there are nested loops, where one contributes O(n) and the other O(log n), then the overall complexity is O(n log n),

e.g.

for(i=0;i<n;i++) // O(n)
 for(j=1;j<n;j=j*3) // O(log n)
//Overall O(nlogn)

This is only a rough rule of thumb. In general, you need a solid grasp of the underlying concepts to derive the complexity; that is why such questions are asked, to test your analytical ability.

Shamim Hafiz - MSFT
  • Nice examples. I have one doubt: in the second example i = i*2, but in the third one j = j*3, so how come both have complexity of O(log n)? – mindtree Feb 04 '11 at 12:55
  • That's the beauty: they are both O(log n). We usually talk about log n with base 2, but a logarithm in any base differs from it only by a constant factor, so it is still O(log n). – Shamim Hafiz - MSFT Feb 04 '11 at 13:05
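
To make that concrete, the change-of-base identity shows why the base never matters:

\log_3 n = \frac{\log_2 n}{\log_2 3} = \frac{1}{\log_2 3} \cdot \log_2 n

Since 1/log2(3) is just a constant factor and Big-O ignores constant factors, O(log_3 n) = O(log_2 n) = O(log n).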

You're moving into a complex topic here ;-) At university, you spend ages on the theory behind the O-notation. I always tended to come down to the following simplification:

An algorithm that does not contain any loops (for example: Write Text to Console, Get Input From User, Write Result To Console) is O(1), no matter how many steps. The "time" it takes to execute the algorithm is constant (this is what O(1) means), as it does not depend on any data.
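
As a minimal sketch of the constant-time case (the function name and prompt are just illustrative), the same three steps in C take a fixed number of operations regardless of any input size:

#include <stdio.h>

// O(1): a fixed sequence of steps, independent of any data size.
void greet(void)
{
    char name[64];
    printf("Enter your name: ");              // write text to console
    if (fgets(name, sizeof name, stdin))      // get input from user
        printf("Hello, %s", name);            // write result to console
}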

An algorithm that iterates through a list of items one by one has complexity O(n) (n being the number of items in the list). If it iterates two times through the list in consecutive loops, it is still O(n), as the time to execute the algorithm still depends on the number of items only.
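
For example, a function (hypothetical name) that makes two consecutive passes over an array does roughly 2n steps, and since constant factors are dropped, it is still O(n):

// O(n): two consecutive loops over n items are 2n steps, which is still O(n).
// Assumes n >= 1.
int sum_and_max(const int *a, int n, int *max_out)
{
    int sum = 0;
    for (int i = 0; i < n; i++)      // first pass: O(n)
        sum += a[i];

    int max = a[0];
    for (int i = 1; i < n; i++)      // second pass: another O(n), total still O(n)
        if (a[i] > max)
            max = a[i];

    *max_out = max;
    return sum;
}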

An algorithm with nested loops over the data, where the inner loop somehow depends on the outer loop, is in the O(n^x) class, where x is the number of nested loops: two nested loops give O(n^2), three give O(n^3), and so on.
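
A rough sketch of the quadratic case (the function is just an example): for each of the n items the inner loop runs up to n times, giving on the order of n * n comparisons, i.e. O(n^2):

// O(n^2): the inner loop depends on the outer loop, roughly n*n/2 comparisons.
int count_duplicate_pairs(const int *a, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)           // outer loop: O(n)
        for (int j = i + 1; j < n; j++)   // inner loop: up to n iterations
            if (a[i] == a[j])
                count++;
    return count;
}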

A binary search algorithm on a sorted array is in the O(log(n)) class, as the number of items under consideration is halved in every step.
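
A minimal binary search sketch (assuming the array is sorted in ascending order); every iteration halves the remaining range, so the loop runs about log2(n) times:

// O(log n): the search range [lo, hi) is halved on every iteration.
int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n;              // search the sorted array a[0..n-1]
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;              // found the key
        else if (a[mid] < key)
            lo = mid + 1;            // discard the lower half
        else
            hi = mid;                // discard the upper half
    }
    return -1;                       // key not present
}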

The above may not be very precise, but this is how I tried to remember some of the important values. From just looking at code, determining the complexity is not always easy.

For further and more detailed (and more correct) reading, the question David Heffernan linked to in his comment seems quite suitable.

Thorsten Dittmar
  • It's very easy to fall into traps if you think about it like that though. For example, what's the complexity of `print sqrt(pi)`? People too often consider library functions to be `O(1)`. – IVlad Feb 04 '11 at 13:03