
This was a question the professor asked. No other student I talked to knew how to answer it, partly because the question is unusual and partly because he has explained almost nothing about complexity yet. Can you help us out here?

John Sage
  • Maybe for `n`s for which 2^n < n^5. – Alin Purcaru Feb 16 '16 at 16:15
  • Another possible factor: if the constant factor on the n^5 algorithm is something like 10^10^10^10^10, that would open up a lot more cases. – Untitled123 Feb 16 '16 at 16:17
  • Possible duplicate of [Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?](http://stackoverflow.com/questions/34179968/are-there-any-cases-where-you-would-prefer-a-higher-big-o-time-complexity-algori) – Salvador Dali Feb 16 '16 at 20:11
  • Consider using the big-O notation in the title of your question: "_In which case would an algorithm of complexity O(2^n) be used over a O(n^5)?_" – Dima Chubarov Feb 17 '16 at 09:51

3 Answers


https://www.wolframalpha.com/input/?i=plot+2%5En+and+n%5E5

According to this plot, 2^n is smaller than n^5 (so the O(2^n) algorithm does less work) when roughly 1.2 < n < 22.5.
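
If you don't want to read the crossover off the plot, a few lines of Python (my own quick check, not part of the linked plot) confirm it for integer `n`:

```python
# Compare 2^n and n^5 for small integer n to see where each one wins.
for n in range(1, 30):
    exp_cost, poly_cost = 2**n, n**5
    winner = "2^n smaller" if exp_cost < poly_cost else "n^5 smaller"
    print(f"n={n:2d}  2^n={exp_cost:10d}  n^5={poly_cost:10d}  -> {winner}")
```

For whole numbers this prints "2^n smaller" exactly for n = 2 through 22, which matches the plot.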

ConditionRacer
  • You should also evaluate the constant factor. One algorithm might have a much larger constant factor than the other, impacting the threshold value. – chqrlie Feb 16 '16 at 22:02
  • OP didn't mention a constant factor, just two algorithms with given complexities – ConditionRacer Feb 17 '16 at 18:31
  • Constant factors are implicit in complexity notations. For a sufficiently large `n`, `alpha*n^5` will beat `beta*2^n`, but you cannot compute the threshold `n` by just solving `2^n > n^5`; you must first determine alpha and beta. Look at BJMyers's answer. – chqrlie Feb 17 '16 at 21:42

Algorithmic complexity is typically expressed in "big-O" notation, which describes how the number of operations an algorithm requires grows as its input size tends toward infinity. This allows for the following simplifications:

  • The contribution of smaller terms (e.g. n^4 in an n^5 algorithm) can be ignored.
  • The contribution from scalar factors can be ignored.

For example, an algorithm with complexity 2n^5 + 3n^4 + n^2 would have big-O complexity of O(n^5) - the scalar factor and smaller terms are ignored.

When n is small, or when scalar factors are very large, these assumptions break down. An algorithm with smaller big-O complexity may have large scalar factors or non-trivial lesser terms that make the algorithm more expensive for small input values.
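
As a concrete illustration, here is a small sketch with invented cost functions (the constants below are made up purely for illustration, not taken from any real algorithm): a large scalar factor on the O(n^5) algorithm keeps the O(2^n) algorithm cheaper well past the point where 2^n alone overtakes n^5.

```python
# Hypothetical cost functions - the constants are invented for illustration only.
def poly_cost(n):
    """An 'O(n^5)' algorithm burdened with a large scalar factor."""
    return 1000 * n**5 + 500 * n**4

def exp_cost(n):
    """An 'O(2^n)' algorithm with no extra constant."""
    return 2**n

for n in (5, 10, 20, 30, 40):
    cheaper = "O(2^n)" if exp_cost(n) < poly_cost(n) else "O(n^5)"
    print(f"n={n:2d}  poly={poly_cost(n):.2e}  exp={exp_cost(n):.2e}  cheaper: {cheaper}")
```

With these made-up constants the exponential algorithm is still the cheaper one at n = 30 and only loses by n = 40, even though 2^n alone already exceeds n^5 at n = 23.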

BJ Myers

Big-O notation is only a rough, asymptotic characterization: it illustrates how an algorithm's cost grows as n increases. As a predictor of actual running time for specific inputs it can be quite inaccurate.

For example:

2^(n)/(n^10) would still be O(2^n) and n^5+n^4+n^3 would still be O(n^5). In this case, for small n, it is obviously preferable to choose the algorithm with complexity O(2^n).
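
To see how far "small n" stretches here, you can evaluate the two running times from the example directly (a quick sketch of mine, not part of the original answer):

```python
# Running times taken from the example above.
def exp_like(n):       # O(2^n) in big-O terms
    return 2**n / n**10

def poly_like(n):      # O(n^5) in big-O terms
    return n**5 + n**4 + n**3

# First n at which the 'exponential' algorithm becomes the more expensive one.
crossover = next(n for n in range(1, 200) if exp_like(n) > poly_like(n))
print(crossover)
```

For this pair the printed crossover is n = 100, so the algorithm that is O(2^n) on paper is the cheaper choice for every n from 1 to 99.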

Overholt
  • Big-O notation is not "inaccurate". Perhaps people are sometimes confused about what it means and how to use it, but it is not *per se* inaccurate. – Gordon Linoff Feb 16 '16 at 16:22
  • @NiklasB.: Yes you are right, I changed it. Thank you – Overholt Feb 16 '16 at 16:27
  • @Overholt, for the first example, though what you say is mathematically correct, the big-O will still be represented as `O(2^n/n^10)` in practice. The point of any big-O representation is to give a **tight** upper bound. Of course one could just dismiss all the above and say `O(2^n)` for all polynomial algorithms and sorting, but nobody ever does that, as it renders the representation useless. – user1952500 Feb 16 '16 at 17:23