
I really can't figure out what "Big-O" is or how to use it in practice, so I hope someone could give me a simple explanation and maybe a little programming example in Java.

I have the following questions:

  • What do these terms mean (as simply as possible), and how are they used in Java: O(1), O(n), O(n^2) and O(log(n))?

  • How do you calculate the Big-O of existing Java code?

  • How does Big-O apply to sorting?
  • How does Big-O apply to recursion?

Hope someone will be able to help.

Thank you in advance

Langkiller

3 Answers


Big O is used to give an idea of how fast an algorithm will scale as the input size increases.

O(1) means that as the input size increases, the running time will not change.

O(n) means that as the input size doubles, the running time will double, more or less.

O(n^2) means that as the input size doubles, the running time will quadruple, more or less.

In general, O(f(n)) means that the running time grows in proportion to f(n): as the input size doubles, the running time increases by a factor of roughly f(2n)/f(n).
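One way to see this scaling concretely is to count loop iterations instead of measuring wall-clock time. A toy sketch of my own (not part of the original answer):

```java
public class ScalingDemo {
    // A linear-time loop: the body runs exactly n times.
    static long linearOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++;
        }
        return count;
    }

    // A quadratic-time loop: the body runs n * n times.
    static long quadraticOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Doubling n doubles the linear count and quadruples the quadratic one.
        System.out.println(linearOps(2000) / linearOps(1000));       // 2
        System.out.println(quadraticOps(2000) / quadraticOps(1000)); // 4
    }
}
```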

Regarding Big-O, sorting, and recursion.

"Big-O sorting" isn't really an algorithm. You can, however, use Big O to describe how fast your sorting algorithm is.

For computing Big-O of a recursive function, I would recommend using the Master Theorem.
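As a concrete example (my own sketch, not from the original answer): recursive binary search does constant work per call and recurses on half the range, giving the recurrence T(n) = T(n/2) + O(1), which the Master Theorem resolves to O(log n):

```java
// Recursive binary search on a sorted array.
// Each call does constant work and recurses on half the range,
// so T(n) = T(n/2) + O(1), i.e. O(log n) by the Master Theorem.
static int binarySearch(int[] a, int key, int lo, int hi) {
    if (lo > hi) {
        return -1; // key not present
    }
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key) {
        return mid;
    } else if (a[mid] < key) {
        return binarySearch(a, key, mid + 1, hi); // search right half
    } else {
        return binarySearch(a, key, lo, mid - 1); // search left half
    }
}
```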

Guidelines for determining Big O:

Usually, you need to determine what your input size is (like array length, or number of nodes in your linked list, etc)

Then ask yourself, what happens if your input size doubles? Or triples? If you have a for loop that goes through each element:

//say array a has n elements
for (int i = 0; i < n; i++) {
    // do stuff 
    a[i] = 3;
}

Then doubling n would make the loop run twice as long, so it would be O(n). Tripling n would triple the time it takes, so the code scales linearly with the input size.

If you have a 2D array and nested for loops:

// we say that each dimension of the array is upper bounded by n,
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // do stuff
        a[i][j] = 7;
    }
}

As n doubles, the code will take 2^2 = 4 times as long. If the input size triples, the code will take 3^2 = 9 times as long. So the Big O is O(n^2).

jh314

Big-O notation describes how an algorithm's running time grows as the input becomes very large.

Three quick programming examples, in Java:

O(1):

for (int i=0; i<3; i++) {
   System.out.println(i);
}

O(n):

int n = 1000000; /* Some arbitrary large number */    
for (int i=0; i<n; i++) {
   System.out.println(i);
}

O(n^2):

int n = 1000000; /* Some arbitrary large number */    
for (int i=0; i<n; i++) {
   for (int j=0; j<n; j++) {
      System.out.println(i * j);
   }
}
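The question also asked about O(log(n)), which none of the snippets above cover, so here is one of my own in the same spirit. A loop that halves its counter on every iteration runs only about log2(n) times:

```java
// O(log n): i is halved on every iteration, so for n = 1,000,000
// the body runs only about 20 times.
static int halvingSteps(int n) {
    int steps = 0;
    for (int i = n; i > 0; i /= 2) {
        steps++;
    }
    return steps;
}
```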

Read more: http://en.wikipedia.org/wiki/Big_O_notation

Hidde

Big O (it is the capital letter O, as opposed to the lowercase letter o) gives an idea of how an algorithm scales when the input size (n) changes.

If, for instance, n doubles from say 100 to 200 items:

  • an O(1) algorithm will take approximately the same time.
  • an O(n) algorithm will take double the time.
  • an O(n^2) algorithm will take four times the time (2^2).
  • an O(n^3) algorithm will take eight times the time (2^3).

And so on.

Note that log(n) can be understood roughly as "the number of digits in n". This means that if you go from n having two digits (like 99) to n having double the number of digits (four digits, like 9999), the running time only doubles. This typically happens when you split the data into two piles, solve each separately, and merge the solutions back together, for instance in sorting.
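That split-solve-merge pattern is exactly what merge sort does. A minimal sketch of my own (not from the answer): the recursion is about log2(n) levels deep and each level does O(n) work merging, giving O(n log n) overall.

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Split the array in half, sort each half recursively, then merge.
    // Recursion depth is about log2(n); each level merges in O(n),
    // so the total work is O(n log n).
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) {
            return a; // a single element is already sorted
        }
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, a.length / 2));
        int[] right = mergeSort(Arrays.copyOfRange(a, a.length / 2, a.length));

        // Merge the two sorted halves in linear time.
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            out[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        }
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }
}
```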

Typically, each loop over the input data multiplies the running time by n. So a single loop is O(n), but if you need to compare every element to every other element you get O(n^2), and so on. Note that Big O hides constant factors, so a slow O(n) algorithm may be outperformed by a fast O(n^2) algorithm for small values of n.
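The "compare every element to every other element" case looks like this in Java (a sketch of my own): there are n*(n-1)/2 pairs, which grows like n^2.

```java
// O(n^2): counts pairs of equal elements by comparing every
// element to every later element -- n*(n-1)/2 comparisons in total.
static int countEqualPairs(int[] a) {
    int pairs = 0;
    for (int i = 0; i < a.length; i++) {
        for (int j = i + 1; j < a.length; j++) {
            if (a[i] == a[j]) {
                pairs++;
            }
        }
    }
    return pairs;
}
```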

Also note that Big O is usually quoted for the worst case. So quicksort, which generally runs fast, is still O(n^2), because there is pathological input data that causes it to compare every element to every other.

This is interesting because most algorithms are fast for small data sets, but you need to know how they behave with input data perhaps thousands or millions of times larger, where it matters whether you have O(n^2) or O(n^3) or worse. The numbers are relative, so Big O does not say whether a given algorithm is slow or fast in absolute terms, just how the worst case grows when you double the input size.

Thorbjørn Ravn Andersen