Do higher-order growth functions take the longest time? So that O(n^2) takes less time than O(2^n)?
Do the highest-order growth functions take the longest to actually run when n is a large number?
The idea of Big O Notation is to express the worst case scenario of algorithm complexity. For instance, a function using a loop may be described as O(n) even if it contains several O(1) statements, since it may have to run the entire loop over n items.
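A minimal sketch of that idea in Python (the function and inputs here are just illustrative):

```python
# O(n) overall: the loop may run up to n times, even though
# each individual statement inside it is O(1).
def contains(items, target):
    for item in items:          # up to n iterations
        if item == target:      # O(1) comparison
            return True         # early exit possible...
    return False                # ...but the worst case scans all n items

print(contains([3, 1, 4], 4))   # -> True
```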
n is the number of inputs to the algorithm you are measuring. Big-O in terms of n tells you how that algorithm will perform as the number of inputs gets increasingly large. This may mean that it will take more time to execute, or that something will take more space to store. If you notice your algorithm has a high Big-O (like O(2^n) or O(n!)), you should consider an alternate implementation that scales better (assuming n will ever get large -- if n is always small it doesn't matter). The key take-away here is that Big-O is useful for showing you which of two algorithms will scale better, or possibly just the input size for which one algorithm would become a serious bottleneck on performance.
Here is an example image comparing several polynomials which might give you an idea of their growth rates in terms of Big-O. The growth time is as n approaches infinity, which in graphical terms is how sharply the function curves upwards along the y-axis as x grows large.
In case it is not clear, the x-axis here is your n, and the y-axis the time taken. You can see from this how much more quickly, for instance, something O(n^2) would eat up time (or space, or whatever) than something O(n). If you graph more of them and zoom out, you will see the incredible difference in, say, O(2^n) and O(n^3).
Using your example of comparing two string arrays of size 20, let's say we do this (pseudocode since this is language agnostic):
for each needle in string_array_1:
    for each haystack in string_array_2:
        if needle == haystack:
            print 'Found!'
            break
This is O(n^2). In the worst case scenario, it has to run completely through the second loop (in case no match is found), which happens on every iteration of the first loop. For two arrays of size 20, this is 400 total iterations. If each array were increased by just one string to size 21, the total number of iterations in the worst case grows to 441! Obviously, this could get out of hand quickly. What if we had arrays with 1000 or more members? Note that it's not really correct to think of n as being 20 here, because the arrays could be of different sizes. n is an abstraction to help you see how bad things could get under more and more load. Even if string_array_1 was size 20 and string_array_2 was size 10 (or 30, or 5!), this is still O(n^2).
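Translated into runnable Python (the array contents below are just placeholder examples, not from the question):

```python
# Nested-loop search of one string array in another -- O(n^2) in the worst case.
string_array_1 = ["apple", "banana", "cherry"]
string_array_2 = ["durian", "banana", "elderberry"]

comparisons = 0
for needle in string_array_1:
    for haystack in string_array_2:
        comparisons += 1
        if needle == haystack:
            print('Found!')
            break

# With no matches at all, comparisons would be len(a1) * len(a2) = 9 here.
print(comparisons)
```

Because "banana" matches on the second comparison of the middle pass, only 8 of the possible 9 comparisons run; the worst case (no matches) would run all 9.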
O-time is only relevant when compared against itself, but 2^n will grow faster than n^2.
Compare as n grows:
n     n^2     2^n
1     1       2
2     4       4
3     9       8
4     16      16
5     25      32
6     36      64
...
10    100     1024
20    400     1048576
Relevant Link: Wolfram Alpha
You can think in reverse: let an algorithm be such that T(n) = t; how many more elements can I process in time 2t?
O(n^2) -> about 41% more elements ((n + 0.41n)^2 ≈ 2n^2)
O(2^n) -> a single element more (2^(n+1) = 2 * 2^n)
Or in time 1000000t?
O(n^2) -> 1000 times more elements
O(2^n) -> only about 20 elements more (since 2^20 ≈ 1000000)
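A quick numeric check of those two claims (the value of n here is arbitrary, chosen just for illustration):

```python
import math

# If T(n) = n^2, growing n by sqrt(2) (~41%) doubles the running time.
n = 1000
budget = n ** 2
doubled = (n * 2 ** 0.5) ** 2   # ~2 * budget, i.e. twice the time for 41% more elements
print(doubled / budget)          # ~2.0

# If T(n) = 2^n, a 1,000,000x time budget buys only ~20 more elements,
# because 2^20 = 1048576 ~ 1,000,000.
print(math.log2(1_000_000))      # ~19.93
```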