I am trying to find a good explanation to quickly understand Big O and Theta theory. I always feel an explanation can be given in a million different ways, and I guess I'm seeking that one explanation that finally makes sense. I know this is a n00b question but any help would be appreciated...
3 Answers
One point of confusion is why an algorithm that performs two operations for each item in a list, something you might call O(2n), and an algorithm that performs a single operation per item, which you would call O(n), are actually both considered O(n).
The reason for this is that once you start working with huge data sets, the difference between O(2n) and O(n) is not really a huge difference when you consider how either compares to O(n^2) or O(log n). Suppose you had three different search algorithms that each process a dataset of one million items. The number of operations each algorithm would take:
O(2n) = 2,000,000
O(n) = 1,000,000
O(n^2) = 1,000,000,000,000
The O(2n) algorithm is only twice as slow as the O(n) one, but the O(n^2) algorithm is a million times as slow. That's an incredibly massive difference! So Big O notation is really dealing with how an algorithm "scales", or in other words, how well it performs as you consider larger and larger data sets.
An O(n^2) algorithm will perform well for very tiny data sets, but with larger datasets its performance degrades rapidly. O(2n) and O(n) degrade evenly and gradually, which is closer to what a user would expect when working with more data.
For this reason, people don't talk about O(2n); they just talk about O(n), since both represent linear time (linear in that the number of operations increases evenly and gradually as you add data).
It can be frustrating at first to think that an algorithm that runs twice as slowly is still considered O(n), but Big O notation is not a measure of relative speed. Big O notation is a measure of how an algorithm scales with respect to the amount of data it is processing.
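The constant-factor idea above can be sketched in a few lines of code. (This is a minimal illustration; the function names here are made up for this example.) Both loops below grow linearly with n; the second simply does twice the work per item, which Big O ignores:

```c
#include <assert.h>

/* One unit of work per item: n operations total -- O(n). */
long ops_single(long n) {
    long count = 0;
    for (long i = 0; i < n; i++)
        count += 1;   /* one operation per item */
    return count;
}

/* Two units of work per item: 2n operations total -- still O(n),
 * because the constant factor 2 never changes as n grows. */
long ops_double(long n) {
    long count = 0;
    for (long i = 0; i < n; i++)
        count += 2;   /* two operations per item */
    return count;
}
```

Doubling n doubles both counts; the ratio between the two stays a fixed factor of 2, never a growing gap like the one between O(n) and O(n^2).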

As far as explanations go, StackOverflow folks seem to get a kick out of my telephone book example. You might also like Big O For Eight Year Olds.

Big O is an analysis tool used to compare algorithms. Big O describes an upper bound on the growth of a function. Think of the upper bound as a cap on the amount of work the algorithm can take as its input grows.
Big O often uses the variable n to denote the number of elements the algorithm operates on. For example, if you are performing an operation on every element of an array, n denotes the size of the array, and you will need to perform the operation n times.
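As a concrete sketch of that (the sum function here is made up for illustration): summing an array touches each of its n elements exactly once, so the work grows linearly with n, which is O(n).

```c
#include <assert.h>

/* Summing an array visits each of the n elements once -- O(n). */
int sum(const int *arr, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)   /* the loop body runs n times */
        total += arr[i];
    return total;
}
```

If the array doubles in size, the loop body runs twice as many times: that proportional relationship is what O(n) captures.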
Programmers use this notation to have a common ground for speaking about how complicated an algorithm is and (as stated above) how well the algorithm scales (meaning how well it performs as n gets larger and larger).
Algorithms for computing the nth element of the Fibonacci sequence are very instructive for the purposes of Big O notation. Consider a recursive method for computing the nth Fibonacci number:
    int fib(int n) {
        if (n == 0)
            return 0;
        else if (n == 1)
            return 1;
        else
            return fib(n - 1) + fib(n - 2);
    }
This is a simple implementation of Fibonacci with a really slow running time for any n beyond, say, 40. We classify this algorithm as O(2^n), since each call can spawn two more calls, which on a graph increases at an exponential rate. Exponential growth is great when it is the money in your bank account but awful when it is the running time of an algorithm.
A different implementation of Fibonacci can significantly speed up the algorithm.
    int fib(int n) {
        if (n == 0)
            return 0;              /* guard: fibArr[1] below would be out of bounds */
        int fibArr[n + 1];         /* C99 variable-length array */
        fibArr[0] = 0;
        fibArr[1] = 1;
        for (int i = 2; i <= n; i++) {
            fibArr[i] = fibArr[i - 1] + fibArr[i - 2];
        }
        return fibArr[n];
    }
This second implementation of Fibonacci has a Big O running time of O(n), which is called linear time. If you plot the two implementations against each other on a graph, you can see the huge difference in running time between the exponential implementation and the linear one.
    Size   Linear   Exponential
    1      1        2
    10     10       1,024
    100    100      1.2676506e+30
    1000   1000     1.071509e+301
Consider each of the above numbers as the number of operations that need to be performed. If each operation takes 1 ms (just an estimate), you can start to guess how long an algorithm might take.
There is a lot more to Big O analysis. Consider these different types of classifications for Big O.
Constant time - O(1)
Logarithmic - O(log n)
Linear - O(n)
Quadratic - O(n^2)
Exponential - O(2^n)
Factorial - O(n!)
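As one concrete example of the logarithmic class (a sketch, not part of the original answer): binary search on a sorted array discards half of the remaining range with every comparison, so it needs only about log2(n) steps, which is O(log n).

```c
/* Binary search on a sorted array: each comparison halves the
 * remaining search range, so it takes about log2(n) steps -- O(log n).
 * Returns the index of target, or -1 if it is not present. */
int binary_search(const int *arr, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) */
        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            lo = mid + 1;              /* target is in the upper half */
        else
            hi = mid - 1;              /* target is in the lower half */
    }
    return -1;  /* not found */
}
```

Searching a million sorted items this way takes about 20 comparisons instead of up to a million for a linear scan.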
When choosing an algorithm it is important to understand the approximate running time and space needed to perform the algorithm.
