
I've read through so many resources and I'm still stuck on understanding what time complexity is. The resources I read were based on various formulas, and I understood that O(n) is used to express time complexity, but I don't know how. Could anyone please explain this principle to me in a clear, understandable way?


2 Answers


Reference: How to Calculate Time complexity algorithm

I found a good article on how to calculate the time complexity of any algorithm or program.

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

statement;

Is constant. The running time of the statement will not change in relation to N.
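For example, each of the following statements (a minimal sketch with made-up variable names) is O(1); its cost does not depend on the size of the input N:

/* each statement below runs in constant time, regardless of N */
int data[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
int value = data[2];          /* array indexing: constant time */
int doubled = value * 2;      /* arithmetic: constant time */
int big = doubled > 5;        /* comparison: constant time */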

for ( i = 0; i < N; i++ )
     statement;

Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
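As a concrete sketch (the function name is made up), summing an array touches each of the N elements exactly once, so the work grows linearly with N:

/* O(N): the loop body does constant work and runs N times,
   so doubling N doubles the total work */
long sum ( int list[], int N )
{
  long total = 0;
  for ( int i = 0; i < N; i++ )
    total += list[i];
  return total;
}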

for ( i = 0; i < N; i++ ) {
  for ( j = 0; j < N; j++ )
    statement;
}

Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.
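A sketch of where this shows up in practice (the function name is made up): comparing every element with every other element makes the inner loop run N times for each of the N outer iterations:

/* O(N * N): for each of the N values of i, the inner loop runs N times.
   Each pair is counted twice here; that constant factor is exactly the
   kind of detail Big O ignores. */
int count_equal_pairs ( int list[], int N )
{
  int count = 0;
  for ( int i = 0; i < N; i++ )
    for ( int j = 0; j < N; j++ )
      if ( i != j && list[i] == list[j] )
        count++;
  return count;
}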

while ( low <= high ) {
  mid = ( low + high ) / 2;
  if ( target < list[mid] )
    high = mid - 1;
  else if ( target > list[mid] )
    low = mid + 1;
  else break;
}

Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
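The loop above assumes low, high, target, and list are already set up; here is one way to flesh it out into a complete function (a sketch, not necessarily the code the author had in mind):

/* Binary search on a sorted array of N elements: returns the index of
   target, or -1 if it is absent. Each pass halves the search range,
   so the loop runs at most about log2(N) + 1 times. */
int binary_search ( int list[], int N, int target )
{
  int low = 0, high = N - 1;
  while ( low <= high ) {
    int mid = low + ( high - low ) / 2;   /* avoids overflow of low + high */
    if ( target < list[mid] )
      high = mid - 1;                     /* discard the upper half */
    else if ( target > list[mid] )
      low = mid + 1;                      /* discard the lower half */
    else
      return mid;                         /* found it */
  }
  return -1;
}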

void quicksort ( int list[], int left, int right )
{
  if ( left >= right )   /* base case: zero or one element, nothing to sort */
    return;
  int pivot = partition ( list, left, right );
  quicksort ( list, left, pivot - 1 );
  quicksort ( list, pivot + 1, right );
}

Is N * log ( N ). The recursion keeps splitting the list in half (the logarithmic part), and at each level of splitting the partitioning does work proportional to N (the linear part), thus the algorithm is a combination of linear and logarithmic.
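The snippet calls partition without showing it; one common way to write it is the Lomuto scheme below (a sketch under that assumption, not necessarily what the original author used). It picks list[right] as the pivot, moves smaller elements to its left, and returns the pivot's final index, which is the split point used by the recursive calls:

/* Lomuto partition sketch: O(N) work per call */
int partition ( int list[], int left, int right )
{
  int pivot = list[right];
  int i = left;                        /* boundary of the "< pivot" region */
  for ( int j = left; j < right; j++ ) {
    if ( list[j] < pivot ) {
      int tmp = list[i]; list[i] = list[j]; list[j] = tmp;
      i++;
    }
  }
  int tmp = list[i]; list[i] = list[right]; list[right] = tmp;
  return i;
}

Called as quicksort ( list, 0, N - 1 ), each level of recursion does O(N) partitioning work in total, and with good pivots there are about log N levels, which is where N * log ( N ) comes from.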

In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is written as O ( f(N) ), where f(N) is the measure of growth. The quicksort algorithm would be described as O ( N * log ( N ) ).

Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)

Edit:

Now the question is: how did log n get into the equation?

  1. At each step, you invoke the algorithm recursively on the first and the second half.
  2. Thus the total number of steps needed is the number of times it takes to get from n down to 1 if you divide the problem by 2 at each step.

The equation is n / 2^k = 1. Since 2^(log n) = n, we get k = log n. So the number of halving steps the algorithm requires is O(log n), and since each level of recursion does O(n) work, the algorithm is O(n log n).
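A quick way to convince yourself of this (a small sketch, not from the original answer): count how many times n can be halved before it reaches 1. For n = 16 the answer is 4, and 2^4 = 16, i.e. k = log2(n).

/* counts how many times N can be halved before reaching 1;
   for N = 16 this returns 4, and log2(16) = 4 */
int halvings ( int N )
{
  int k = 0;
  while ( N > 1 ) {
    N /= 2;   /* the same halving the recursion performs */
    k++;
  }
  return k;
}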

Also, big O notation gives us an easy-to-calculate, platform-independent approximation of how the algorithm behaves asymptotically (as the input grows toward infinity). It divides algorithms into "families" by their complexity and lets us compare them easily.

You can also check out this Question for more reading: Time complexity of the program using recurrence equation

  • Quicksort is only **O(N * log ( N ))** on average. Worst case is **O(N²)**. E.g. merge-sort and heap-sort have worst case **O(N * log ( N ))**. But in real life quicksort is still faster – MrSmith42 Apr 26 '13 at 09:17
  • Quicksort's worst-case time complexity occurs when the pivot repeatedly produces two regions, one of size 1 element and the other of size (n-1) elements. The average case occurs when the pivot splits the input into two regions of roughly n/2 elements each. – Vinayak Pahalwan Apr 26 '13 at 09:19
  • Most of the time *calculate the time complexity of an algorithm* means: what is the worst case time complexity. I only wanted to point out that **O(N * log ( N ))** is not the worst case complexity for quicksort, but there are sorting algorithms with this worst case complexity. There is no way to ensure that quicksort finds 'good' pivots. – MrSmith42 Apr 26 '13 at 10:16
  • @MrSmith42 yes and you were right there – Vinayak Pahalwan Apr 26 '13 at 10:17
  • @Vinayak So say you have a Java application with lots of code similar to this: `public String getName(int idx) { return NAME_LIST[idx]; }` Being new to Java, I assume these will be counted as simple statements? idx is a number in this case, but I still don't understand how to apply the formula and calculate the time complexity of such code? – Ilja Apr 26 '13 at 10:21
  • @Ilja A Java statement is the smallest unit that is a complete instruction, one with a semicolon at the end; for example *age=23;* is an assignment statement. Also, I can't say based on the code you posted; a better understanding of the code (if you can provide it) is necessary to calculate the time complexity. I'll try my best to calculate the TC. – Vinayak Pahalwan Apr 26 '13 at 10:28
  • @Vinayak It's a big chunk of incomplete code; I think there is another file associated with it, but basically I can't figure out the time complexity of this file: http://pastebin.com/JQxZbx3f It's big, so I don't think it's a good idea to paste it here on Stack Overflow. – Ilja Apr 26 '13 at 11:22
  • While specifying time complexity, should we take into consideration the internal operations done by Java? For example, if we print a string char by char, the time complexity will be O(n), but should we also take into consideration the computation of the string's length done by Java, which will be used in the for loop? – Tushar Banne Feb 06 '17 at 13:34

You should also read about amortized analysis to fully understand the notions of time complexity. Amortized analysis gives a worst-case bound on the average cost of an operation by considering the whole sequence of operations performed, rather than a single operation in isolation.

The link to the Wikipedia article is given below:

http://en.wikipedia.org/wiki/Amortized_analysis
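A standard illustration (my own sketch, not part of the linked article) is a dynamic array that doubles its capacity when it fills up: a single append is occasionally O(n) because of the copy, but n appends cost O(n) in total, so the amortized cost per append is O(1).

#include <stdlib.h>

typedef struct {
  int *items;
  int size;
  int capacity;
} DynArray;

/* Appending is amortized O(1): the occasional realloc copies up to
   'size' elements, but a resize only happens after 'size' cheap
   appends, so the total cost of n appends is O(n). */
void append ( DynArray *a, int value )
{
  if ( a->size == a->capacity ) {
    int new_cap = a->capacity ? a->capacity * 2 : 1;
    a->items = realloc ( a->items, new_cap * sizeof ( int ) );  /* error handling omitted in this sketch */
    a->capacity = new_cap;
  }
  a->items[a->size++] = value;
}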
