I understand the principles of asymptotic notation, and I get what it means when something is O(1) or O(n^2), for example. But what does O(log n) mean? Or O(n log n), for example?

possible duplicate of [What would cause an algorithm to have O(log n) complexity?](http://stackoverflow.com/questions/9152890/what-would-cause-an-algorithm-to-have-olog-n-complexity) – templatetypedef Feb 08 '12 at 19:15
-
that's a lotta downvotes! and there is a CLEAR difference between 'what does log mean?' and 'what kinds of algorithms have O(log n) complexity.' – Dollarslice Feb 08 '12 at 19:21
-
@templatetypedef Not an exact duplicate; that question seems to implicitly assume knowledge of what a logarithm actually is and focuses on types of problems exhibiting particular asymptotic behaviors. – Michael McGowan Feb 08 '12 at 19:24
-
@SirYakalot- if you look at my answer to that other question, it goes through several different examples of algorithms with logs in their complexity and shows what high-level structures result in logarithms arising. I think it's the answer you're looking for. – templatetypedef Feb 08 '12 at 19:25
-
maybe it does answer this one to some extent, but this still isn't a duplicate. – Dollarslice Feb 08 '12 at 19:28
-
@SirYakalot: to be fair, you didn't ask "what does log mean?", you asked "what does O(log n) mean?". Since understanding logarithms is much more common than understanding asymptotic complexity -- and "what does log mean" is easily googled -- it was reasonable to assume your question was meant in templatetypedef's sense. – DSM Feb 08 '12 at 19:33
-
actually I asked - "What does 'log' represent in asymptotic notation?" - and I did google. You guys are much better at explaining things than all those rubbish math sites. But I get your point. Feel free to edit. – Dollarslice Feb 08 '12 at 19:41
3 Answers
Log is short for "logarithm": http://en.wikipedia.org/wiki/Logarithm
Logarithms tell us for example how many digits are needed to represent a number, or how many levels a balanced tree has when you add N elements to it.
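For example, both claims can be checked numerically in Python (a quick sketch of my own, not part of the original answer; the value of `n` is arbitrary):

```python
import math

n = 1_000_000

# How many decimal digits are needed to represent n:
digits = math.floor(math.log10(n)) + 1   # 7 digits for 1,000,000

# How many levels a balanced binary tree has after adding n elements:
levels = math.ceil(math.log2(n + 1))     # about 20 levels

print(digits, levels)
```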

O(log n) is the set of functions of `n` that are bounded by `A*log(n)`, for some constant `A`, as n grows without limit. So no, it's not a number, but it's not such an abstract concept either. You have to understand what "log" means to be able to discuss O(log n) meaningfully. – Joni Feb 08 '12 at 19:28
Check: en.wikipedia.org/wiki/Big_O_notation
Remember that log grows more slowly than an exponential function. So if you have one algorithm that is O(n^2) and another that does the same job with a logarithmic cost, the latter will generally be more efficient (in general terms, not always!).
To evaluate the complexity of a function (or algorithm) you should mainly take its execution time and space into consideration. You can evaluate a function or algorithm by other measures too, but initially those two will do.
EDIT: http://en.wikibooks.org/wiki/Data_Structures/Asymptotic_Notation
Also, check the sorting algorithms; they give great insight into complexity.
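To make the gap between these growth rates concrete, here is a small sketch (my addition, not part of the answer) that prints how log n, n log n, and n^2 diverge as n grows:

```python
import math

# log2(n) grows far more slowly than n*log2(n), which in turn
# grows far more slowly than n^2.
for n in (8, 64, 1024):
    print(f"n={n:4d}  log2(n)={math.log2(n):4.0f}  "
          f"n*log2(n)={n * math.log2(n):7.0f}  n^2={n * n:7d}")
```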
-
so, like bisection search that increases 'the opposite of exponentially' - ah, is that what it means? logarithmically == the opposite of exponentially? – Dollarslice Feb 08 '12 at 19:22
-
@SirYakalot: indeed, they have an **inverse** relation. But when applied in the context of algorithms, the relation is **not** that trivial. It's not like you can write the **same** algorithm in two different ways and get an **exponential** cost one way and a **logarithmic** cost the other (at least, I'm not aware of one). (Check the **sorting** algorithms and the **search** ones; you will see interesting things.) – Feb 08 '12 at 19:24
log is a mathematical function. It is the inverse of exponentiation: log (base 2) of 2^n is n. In practice, it grows more slowly than n^c for any positive c (including fractional c such as 1/2, which gives the square root). Check Wikipedia for more info.

@SirYakalot it is the opposite of exponentiation (i.e. 2^n). It is not the opposite of n^2 (for that, you want sqrt(n)). With exponentiation, the time it takes doubles (or gets multiplied by some other number > 1) with each additional element we process. Here, every unit of time doubles the amount of data you can process. Think of a binary search: one additional run of the loop doubles the size of the input that we can process. – Retief Feb 08 '12 at 19:22
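The binary-search behavior this comment describes can be sketched as follows (an illustrative implementation of my own, with a step counter added to show that doubling the input adds only one extra iteration):

```python
def binary_search(sorted_list, target):
    """Return (index, steps) of target, or (-1, steps) if absent.
    Each iteration halves the remaining range, so the loop runs
    at most about log2(n) + 1 times."""
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# Doubling the input size adds only one extra step in the worst case.
print(binary_search(list(range(1024)), 0))
print(binary_search(list(range(2048)), 0))
```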