
Do higher-order growth functions take the longest time, so that O(n^2) takes less time than O(2^n)?

Do the highest-order growth functions take the longest to actually run when n is a large number?

Mohsen Kamrani
Jon
  • I think you can expect better answers on Math or another Stack Exchange site than here, but we will see. –  Mar 12 '14 at 04:02
  • possible duplicate of [Big Oh Notation - formal definition](http://stackoverflow.com/questions/2754718/big-oh-notation-formal-definition) – Mohsen Kamrani Mar 12 '14 at 04:12
  • @Jon, I have updated my answer to attempt to make my explanation more clear. Has this helped you? I can also pare it down if it is too much. :) – Two-Bit Alchemist Mar 12 '14 at 04:22
  • Please rephrase your second question. –  Mar 12 '14 at 07:23

3 Answers


The idea of Big O Notation is to express the worst case scenario of algorithm complexity. For instance, a function using a loop may be described as O(n) even if it contains several O(1) statements, since it may have to run the entire loop over n items.
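
For example, here is a minimal Python sketch (my own illustration, not code from the question): every individual statement is O(1), but the function as a whole is O(n) because the loop body runs once per input item.

```python
def sum_list(items):
    """Runs in O(n): each statement is O(1), but the loop executes n times."""
    total = 0            # O(1)
    for value in items:  # loop body runs once per item -> n iterations
        total += value   # O(1) work per iteration
    return total         # O(1)
```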

n is the number of inputs to the algorithm you are measuring. Big-O in terms of n tells you how that algorithm will perform as the number of inputs gets increasingly large. This may mean that it will take more time to execute, or that something will take more space to store. If you notice your algorithm has a high Big-O (like O(2^n) or O(n!)), you should consider an alternate implementation that scales better (assuming n will ever get large -- if n is always small it doesn't matter). The key take-away here is that Big-O is useful for showing you which of two algorithms will scale better, or possibly just the input size for which one algorithm would become a serious bottleneck on performance.
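
To make "which of two algorithms will scale better" concrete, here is a small illustrative Python sketch (the function names are my own, hypothetical): both functions answer the same question, but one grows quadratically and the other roughly linearly, so only the second stays practical as n gets large.

```python
def has_duplicate_quadratic(items):
    """O(n^2): compares every pair of items with nested loops."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    """O(n): a single pass, using a set for constant-time lookups on average."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```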

Here is an example image comparing several polynomials which might give you an idea of their growth rates in terms of Big-O. The growth time is as n approaches infinity, which in graphical terms is how sharply the function curves upwards along the y-axis as x grows large.

[Image: Comparison of several polynomials relevant to Big-O]

In case it is not clear, the x-axis here is your n, and the y-axis is the time taken. You can see from this how much more quickly, for instance, something O(n^2) would eat up time (or space, or whatever) than something O(n). If you graph more of them and zoom out, you will see the incredible difference between, say, O(2^n) and O(n^3).

Attempting a Concrete Example

Using your example of comparing two string arrays of size 20, let's say we do this (written here as Python, though the idea is language-agnostic):

```python
for needle in string_array_1:        # runs once per string in the first array
    for haystack in string_array_2:  # worst case: scans the whole second array
        if needle == haystack:
            print('Found!')
            break                    # stops only the inner loop
```

This is O(n^2). In the worst-case scenario, it has to run completely through the second loop (in case no match is found), and that happens on every iteration of the first loop. For two arrays of size 20, this is 400 total iterations. If each array were increased by just one string, to size 21, the total number of iterations in the worst case grows to 441! Obviously, this could get out of hand quickly. What if we had arrays with 1000 or more members? Note that it's not really correct to think of n as being 20 here, because the arrays could be of different sizes. n is an abstraction to help you see how bad things could get under more and more load. Even if string_array_1 was size 20 and string_array_2 was size 10 (or 30, or 5!), this is still O(n^2).
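
To see how fast that worst case grows, here is a tiny sketch (assuming, for simplicity, that both arrays have the same size n) that just prints the worst-case comparison count n * n for a few sizes:

```python
# Worst-case comparisons for the nested loop above, assuming both arrays have size n
for n in (20, 21, 100, 1000):
    print(f"n = {n:>4}: up to {n * n:,} comparisons")

# n =   20: up to 400 comparisons
# n =   21: up to 441 comparisons
# n =  100: up to 10,000 comparisons
# n = 1000: up to 1,000,000 comparisons
```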

Two-Bit Alchemist
  • I'm not quite understanding it. If I was writing a function, I would want to have it as close to square root of n as possible (the yellow line)? – Jon Mar 12 '14 at 04:10
  • It's more like if you have two separate implementations that might work for what you want, two different ways to write the same function, but one grows in time O(n) and the other O(n^2), you would definitely prefer the O(n) implementation. A practical example of this would be a function that uses a single loop, O(n), instead of nested loops, O(n^2). – Two-Bit Alchemist Mar 12 '14 at 04:13
  • So when functions are ordered by their highest order of growth, the highest ordered functions will take the longest time for large numbers of n? – Jon Mar 12 '14 at 04:22
  • What actually is n in this case? Let's say you were comparing the individual elements of two string arrays whose size was 20; what would n be in this case? Also, what do you mean by trying to find the worst complexity? Maybe a more concrete example (with actual numbers) would help me out more. – Jon Mar 12 '14 at 04:25
  • It's difficult to come up with a good concrete example because there are so many things Big-O can measure, but I will try. It is also important to understand that only the largest term 'counts', because in the long term that will make the algorithm grow the fastest. Thus, O(n^3 + 6n^2 + 4n + 32) is considered to be O(n^3), because the cubic growth will outstrip the others over time. – Two-Bit Alchemist Mar 12 '14 at 04:32
  • Thanks for that example. So n^2 is the worst case scenario, but if we find what we are looking for on the first iteration and first element of the second loop, it would have taken O(1) time. So is the worst case what they call the upper bound or lower bound? – Jon Mar 12 '14 at 04:46
  • It is not that if we find something early it becomes O(1). The whole point of Big-O is to see how our algorithms will perform if things do not go our way. This algorithm is _always_ O(n^2), and if it breaks early we simply got lucky. ;) The worst case is the upper bound, because it is the _maximum_ time/space we will need. – Two-Bit Alchemist Mar 12 '14 at 04:50
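
The comments above also note that only the largest term counts. A small sketch (my own, just to illustrate that point) shows why: as n grows, the lower-order terms of n^3 + 6n^2 + 4n + 32 contribute less and less relative to n^3.

```python
# Ratio of the full polynomial to its dominant term n^3 as n grows
for n in (10, 100, 1000, 10000):
    full = n**3 + 6 * n**2 + 4 * n + 32
    print(n, round(full / n**3, 4))

# 10 1.672
# 100 1.0604
# 1000 1.006
# 10000 1.0006
```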

Big-O running time is only meaningful when compared against other growth rates, and 2^n grows much faster than n^2.

Compare as n grows:

| n   | n^2 | 2^n       |
|----:|----:|----------:|
| 1   | 1   | 2         |
| 2   | 4   | 4         |
| 3   | 9   | 8         |
| 4   | 16  | 16        |
| 5   | 25  | 32        |
| 6   | 36  | 64        |
| ... | ... | ...       |
| 10  | 100 | 1,024     |
| 20  | 400 | 1,048,576 |
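
The same comparison can be generated with a couple of lines of Python (a small sketch, not part of the original answer):

```python
# Print n, n^2 and 2^n side by side
for n in (1, 2, 3, 4, 5, 6, 10, 20):
    print(f"{n:>2}  {n**2:>6}  {2**n:>9}")
```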

Relevant Link: Wolfram Alpha

Yos233

You can think of it in reverse: suppose an algorithm takes time T(n) = t on n elements; how many more elements can it process in time 2t?

O(n^2) -> about 41% more elements, because (1.41n)^2 ≈ 2n^2 (the factor is √2 ≈ 1.41)

O(2^n) -> just one more element, because 2^(n+1) = 2·2^n

Or in time 1,000,000t?

O(n^2) -> 1000 times as many elements (√1,000,000 = 1000)

O(2^n) -> only about 20 more elements (log2(1,000,000) ≈ 20)
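
The same reasoning can be checked numerically with a short Python sketch (the helper name is my own, illustrative only): with k times more time, an O(n^2) algorithm handles √k times as many elements, while an O(2^n) algorithm only handles log2(k) additional elements.

```python
import math

def extra_capacity(k):
    """Given k times more time, how much more input can each growth rate handle?"""
    quadratic_factor = math.sqrt(k)    # O(n^2): n grows by a factor of sqrt(k)
    exponential_extra = math.log2(k)   # O(2^n): n grows by log2(k) extra elements
    print(f"{k:>9,}x time -> O(n^2): {quadratic_factor:,.2f}x elements, "
          f"O(2^n): +{exponential_extra:.1f} elements")

extra_capacity(2)          # 1.41x elements vs. +1.0 element
extra_capacity(1_000_000)  # 1,000.00x elements vs. +19.9 elements
```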