-4

My algorithm class's homework claims that O(n^3) is more efficient than O(2^n).

When I put these functions into a graphing calculator, f(x) = 2^x appears to be consistently more efficient for very large n (starting from around n = 982).

Considering that for a function f(n) = O(g(n)), it must be smaller for all n greater than some n_0, wouldn't this mean that from n = 982 we could say that O(2^n) is more efficient?

coyote
  • A calculator is no substitute for actually learning math. – n. m. could be an AI Aug 06 '17 at 02:11
  • 2^982 is about 40 billion billion billion billion billion billion billion billion billion billion [continues until 98 "billion"s]. 982^3 is about a billion. Would you rather something take a billion steps, or 40 billion billion billion billion billion...? – user2357112 Aug 06 '17 at 02:14
  • Yes it is. With a pencil and paper, draw a graph of N^3 versus 2^N. For large enough values of N. (Your calculator is probably screwing up because of overflow. 2^982 is too large for a typical calculator!) For extra points, prove it mathematically. (Hint: by induction ...) – Stephen C Aug 06 '17 at 02:16
  • (correction - 40 million billion billion billion [continues until 32 "billion"s]) – user2357112 Aug 06 '17 at 02:21
  • Can you elaborate on what your graphing calculator was showing you and how you drew this conclusion from it? – templatetypedef Aug 06 '17 at 03:29
  • @templatetypedef After checking on other calculators, there seems to be something wrong with the one I was using. Thanks. – coyote Aug 06 '17 at 03:52
  • @n.m. If you have any suggestions for specific points I should review, please let me know. Thanks! – coyote Aug 06 '17 at 03:53
  • 2^10 is slightly greater than 1000. 2^30 is greater than 1000^3. 982^3 is less than 1000^3. 2^982 is vastly, hugely, mindbogglingly greater than 2^30. Since 1000=10^3, n=10 is about the point where 2^n takes over n^3 for good. If you don't have all of the above in your subconsciousness, you have a problem as a math/CS/programming student. Do whatever it takes to fix it. – n. m. could be an AI Aug 06 '17 at 04:43
  • If this question was asked in a way that made sense, it would probably be a duplicate of [Can an O(n) algorithm ever exceed O(n^2) in terms of computation time?](https://stackoverflow.com/questions/22594174/can-an-on-algorithm-ever-exceed-on2-in-terms-of-computation-time) – Bernhard Barker Aug 06 '17 at 16:24
  • All polynomial functions are more efficient (faster in performance) than exponential functions. – Ṃųỻịgǻňạcểơửṩ Jul 24 '18 at 17:52
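
The comparisons in these comments are easy to check with exact integer arithmetic, which does not overflow the way a typical graphing calculator does. A minimal Python sketch (the helper name `crossover` is just an illustrative choice):

```python
# Check the comparisons from the comments with exact integers (no overflow).
def crossover(limit=100):
    """Smallest n after which 2**n stays above n**3 (checked up to `limit`)."""
    last_below = 0
    for n in range(1, limit + 1):
        if 2**n <= n**3:
            last_below = n
    return last_below + 1

print(crossover())        # 10  -- from n = 10 on, 2^n > n^3 for good
print(len(str(2**982)))   # 296 -- 2^982 has 296 digits, about 4 * 10^295
print(len(str(982**3)))   # 9   -- 982^3 has 9 digits, just under a billion
```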

2 Answers

18

2^n grows much faster than n^3. Maybe you entered the wrong values into your calculator, or it overflowed. Also note that more efficient means less time, which means a lower value on the y-axis.

Let me show you some correct plots for those functions (using Wolfram Alpha):

[Plot: early course of 2^n and n^3]

At first 2^n is smaller (but only over a tiny range); after that you can see how 2^n grows far beyond n^3.

After this intersection the situation never changes again: 2^n remains vastly greater than n^3. That also holds for the range you analysed, i.e. n > 982, as seen in the next plot (the curve for n^3 is barely above the x-axis):

[Plot: 2^n vastly greater than n^3]
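
If you want to reproduce plots like these yourself, here is a minimal sketch using numpy and matplotlib (my tooling choice here; the plots above come from Wolfram Alpha):

```python
# Minimal sketch of the two plots above, using numpy/matplotlib.
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Early course: 2^n is briefly below n^3, then overtakes it near n = 10.
n_small = np.linspace(0, 12, 200)
ax1.plot(n_small, 2.0**n_small, label="2^n")
ax1.plot(n_small, n_small**3, label="n^3")
ax1.set_title("early course of 2^n and n^3")
ax1.legend()

# Larger n: on a linear scale, n^3 hugs the x-axis while 2^n explodes.
n_large = np.linspace(0, 50, 200)
ax2.plot(n_large, 2.0**n_large, label="2^n")
ax2.plot(n_large, n_large**3, label="n^3")
ax2.set_title("2^n vastly greater than n^3")
ax2.legend()

plt.show()
```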


Also note that in Big-O notation we always compare functions based on their growth. This is why something like O(n^3) does not contain only the functions f with f(x) <= n^3, but rather the functions f with f(x) <= C * n^3, where C is an arbitrary positive constant; it could be big or small. The constant factor is abstracted away so that only the growth matters in the comparison. Also note that the condition is allowed to fail for finitely many x; there only needs to exist some bound x' from which on it holds, i.e. for every x > x'.
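
To make the role of the constant concrete, here is a minimal sketch; C = 1_000_000 is an arbitrary, deliberately large example value. Even a huge constant only pushes the crossover point further out; it never stops 2^n from overtaking C * n^3:

```python
# Even with a huge constant C, C * n^3 is eventually overtaken by 2^n.
# C = 1_000_000 is an arbitrary, deliberately large example value.
C = 1_000_000

n = 1
while C * n**3 >= 2**n:
    n += 1
print(n)  # 36 -- for every n >= 36, 2^n > 1_000_000 * n^3
```

From that point on the gap only widens, because 2^n / n^3 keeps growing.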

Compare the explanation above to the complete mathematical definition from Wikipedia, where C is k, x is n, and x' is n_0:

f(n) = O(g(n)) as n → ∞ if there exist a positive constant k and a real number n_0 such that |f(n)| <= k * g(n) for all n >= n_0.

If this condition holds, then f(n) is in the set O(g(n)).

Zabuzard
  • As a note, I'd like to add that this hidden constant factor `C` could be the reason why an algorithm that runs in `O(2^n)` could indeed be faster for inputs of a practical size than one running in `O(n^3)`. It could be that the second algorithm has an extreme overhead, with a hidden constant like `C = 1_000_000` or something very big. However, `O`-notation means: *"at some point, where inputs get really large, the `O(2^n)` algorithm will be slower than the `O(n^3)` one"*. But again, it could be that the other algorithm runs faster for smaller inputs, and those could be exactly the **relevant inputs** in practice. – Zabuzard Aug 06 '17 at 03:44
2

You confuse O(2^n) and 2^n. O(2^n) is actually C * 2^n, where C is an arbitrarily chosen positive constant. Likewise, O(n^3) is D * n^3, where D is another arbitrarily chosen positive constant. The claim "O(n^3) is more efficient than O(2^n)" means that, given any fixed C and D, it is always possible to find an n_0 such that for any n >= n_0, D * n^3 < C * 2^n.
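
A minimal numeric sketch of that claim (the constants passed in and the helper name `find_n0` are arbitrary illustrative choices):

```python
# For any fixed C, D > 0 there is an n0 with D * n^3 < C * 2^n for all n >= n0.
def find_n0(C, D, limit=200):
    """Smallest n0 such that D * n**3 < C * 2**n for every n from n0 up to `limit`."""
    n0 = 1
    for n in range(1, limit + 1):
        if D * n**3 >= C * 2**n:
            n0 = n + 1  # the inequality failed at n, so n0 must come later
    return n0

print(find_n0(C=1, D=1))           # 10
print(find_n0(C=1, D=1_000_000))   # 36 -- a big D only delays the crossover
```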

DYZ