What is the difference between Big-O notation O(n) and Little-O notation o(n)?

5 Answers
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.
In Little-o, the inequality must hold no matter how small you make k (as long as it is positive): for every such k, there must be some minimum x after which the inequality holds.
These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).
One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.
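To make the quantifier difference concrete, here is a minimal Python sketch (not part of the original answer; the function names and sampling approach are made up for illustration). It only gathers numerical evidence over a finite range, so it is a sanity check, not a proof:

```python
def looks_big_o(f, g, k, a, xs):
    """Big-O check: 0 <= f(x) <= k*g(x) for every sampled x > a (one fixed k suffices)."""
    return all(0 <= f(x) <= k * g(x) for x in xs if x > a)

def looks_little_o(f, g, ks, xs):
    """Little-o check: for *every* k we try, some cutoff a must exist past which f(x) < k*g(x)."""
    def cutoff_exists(k):
        # Last sampled x where the inequality fails; any a beyond it would work.
        failures = [x for x in xs if not (0 <= f(x) < k * g(x))]
        return max(failures, default=0) < max(xs)
    return all(cutoff_exists(k) for k in ks)

xs = range(1, 10_000)
print(looks_big_o(lambda x: x * x, lambda x: x * x, k=1, a=0, xs=xs))          # True:  x² ∈ O(x²)
print(looks_little_o(lambda x: x * x, lambda x: x**3, ks=[1, 0.1, 0.01], xs=xs))  # True:  x² ∈ o(x³)
print(looks_little_o(lambda x: x * x, lambda x: x * x, ks=[1, 0.1, 0.01], xs=xs)) # False: x² ∉ o(x²)
```

Note how Big-O needs only one (k, a) pair to succeed, while Little-o must survive every k we throw at it.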
More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.
However, for f ∈ o(g) to be true, g must include a higher power of x in its formula, so the relative separation between f(x) and g(x) must actually get larger as x gets larger.
To use purely math examples (rather than referring to algorithms):
The following are true for Big-O, but would not be true if you used little-o:
- x² ∈ O(x²)
- x² ∈ O(x² + x)
- x² ∈ O(200 * x²)
The following are true for little-o:
- x² ∈ o(x³)
- x² ∈ o(x!)
- ln(x) ∈ o(x)
Note that if f ∈ o(g), this implies f ∈ O(g). e.g. x² ∈ o(x³), so it is also true that x² ∈ O(x³). (Again, think of O as <= and o as <.)
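These memberships can be sanity-checked with the limit test discussed in the next answer: f ∈ o(g) exactly when f/g tends to 0, while a finite nonzero limit still gives f ∈ O(g). A quick sketch using sympy (assuming it is installed; this snippet is illustrative, not from the original answer):

```python
from sympy import symbols, limit, log, oo

x = symbols('x', positive=True)

print(limit(x**2 / x**3, x, oo))          # 0     -> x² ∈ o(x³)
print(limit(log(x) / x, x, oo))           # 0     -> ln(x) ∈ o(x)
print(limit(x**2 / (200 * x**2), x, oo))  # 1/200 -> x² ∈ O(200·x²) but not o(200·x²)
print(limit(2**x / 3**x, x, oo))          # 0     -> 2^x ∈ o(3^x), as asked in the comments
```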

- Yes -- the difference is in whether the two functions may be asymptotically the same. Intuitively I like to think of big-O meaning "grows no faster than" (i.e. grows at the same rate or slower) and little-o meaning "grows strictly slower than". – Phil Sep 01 '09 at 20:38
- Copy this to wikipedia? This is much better than what's there. – cloudsurfin Nov 09 '14 at 04:42
- @TylerMcHenry for little(o) would it be true if it is 2^n = o(3^n)? – S A Dec 28 '15 at 22:01
- @SA Yes. That's a trickier case where the simpler rule I gave about "higher powers of x" isn't obviously applicable. But if you look at the more rigorous limit definitions given in Strilanc's answer below, what you want to know is if lim n->inf (2^n / 3^n) = 0. Since (2^n / 3^n) = (2/3)^n and since for any 0 <= x < 1, lim n->inf (x^n) = 0, it is true that 2^n = o(3^n). – Tyler McHenry Jan 17 '16 at 05:02
- Your definition does not seem correct to me. For example your big-O definition implies `x ~ O(x^2)` (with, e.g., a=k=1). – Emerald Weapon Oct 26 '16 at 20:02
- @EmeraldWeapon The definition given here doesn't imply x ~ O(x^2), it says x ∈ O(x^2), and O(x) is a subset of O(x^2). – Cubic Apr 02 '17 at 10:18
- `"f ∈ o(g) means that f's asymptotic growth is strictly slower than g's"` With this definition, `x ∈ o(2*x)`. – Eric Duminil Jan 11 '18 at 10:50
- I had never thought of these two conditions as being so similar before, with the only difference being a "for all" versus an "exists". Thanks for that. – Patch Sep 17 '18 at 13:53
- Be careful with "In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero." It's not "for every `a` there is a `k` such that ...", it's "for every `k` there is an `a` such that ..." – GA1 May 15 '19 at 10:30
- "In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero." No, this is incorrect. – Filippo Costa Mar 04 '20 at 13:20
- For remembering: the condition is stricter, so the circle is smaller (big-O compared to little-o). – o0omycomputero0o Mar 17 '20 at 16:17
- How do you translate the above set-based version to how the o-notation is commonly used, like f(x) = x + o(x)? – Daniel Oct 07 '22 at 19:59
- Better to understand set theory to formally express the idea of little/big O notation. – Ziqi Fan Nov 18 '22 at 16:33
- @Daniel, O-notation is commonly used when Theta-notation as defined [here](https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_notations) is meant. Theta notation is discussed in the next answer as well. – Aelian Feb 08 '23 at 03:04
Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.
For example, the function f(n) = 3n is:

- in O(n²), o(n²), and O(n)
- not in O(lg n), o(lg n), or o(n)

Analogously, the number 1 is:

- ≤ 2, < 2, and ≤ 1
- not ≤ 0, < 0, or < 1
Here's a table, showing the general idea:
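| Notation  | Number analogy | Limit definition (as n → ∞) |
|-----------|----------------|------------------------------|
| f ∈ o(g)  | <              | lim f(n)/g(n) = 0            |
| f ∈ O(g)  | ≤              | lim f(n)/g(n) < ∞            |
| f ∈ Θ(g)  | =              | 0 < lim f(n)/g(n) < ∞        |
| f ∈ Ω(g)  | ≥              | lim f(n)/g(n) > 0            |
| f ∈ ω(g)  | >              | lim f(n)/g(n) = ∞            |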
(Note: the table is a good guide, but its limit definition should be in terms of the superior limit instead of the normal limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having a normal limit, because it still has a lim sup: 4.)
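A quick numeric look at that oscillation (a hypothetical snippet, checking a tail of samples rather than a formal lim sup):

```python
# 3 + (n mod 2) has no ordinary limit, but its values stay bounded above by 4.
f = lambda n: 3 + (n % 2)

tail = [f(n) for n in range(1_000, 1_100)]
print(min(tail), max(tail))       # 3 4 -> oscillates forever, so no ordinary limit
print(max(tail) < float('inf'))   # True -> the lim sup (4) is finite, so f ∈ O(1)
```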
I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.

- I have one question: what's the difference between line 3 and 4 (limit definitions column)? Could you please show me one example where 4 holds (lim > 0), but not 3? – Masked Man Jan 21 '14 at 05:08
- Oh, I figured it out. Big Omega is for lim > 0, Big Oh is for lim < infinity, Big Theta is when both conditions hold, meaning 0 < lim < infinity. – Masked Man Jan 21 '14 at 05:16
- For f ∈ Ω(g), shouldn't the limit at infinity evaluate to >= 1? Similarly for f ∈ O(g), <= 1? – user2963623 Aug 22 '15 at 07:16
- @user2963623 No, because finite values strictly above 0, including values between 0 and 1, correspond to "same asymptotic complexity but different constant factors". If you omit values below 1, you have a cutoff in constant-factor space instead of in asymptotic-complexity space. – Craig Gidney Aug 22 '15 at 13:52
- I get the intuition of what you're saying. So, does f ∈ O(g) (or Ω(g)) imply that g approximates the growth rate of complexity with input size rather than strictly a bound? – user2963623 Aug 22 '15 at 20:22
- @isarandi No, you need some extra notion of the functions being well behaved for that. When some of the limits don't exist, your expression can fail. For example, `g(x) = x` and `f(x) = x*(1 + (-1)^x)`. It is not the case that f is asymptotically less-than g, because f keeps bouncing up to g. It is the case that f is asymptotically at-most g, because f doesn't bounce asymptotically higher than g. It is not the case that g is asymptotically at-most f, because f keeps bouncing down to 0. So (f ∈ o(g)) is false while (f ∈ O(g) AND g NOT ∈ O(f)) is true. – Craig Gidney Feb 27 '18 at 18:16
- @CraigGidney Thanks! Is it true for monotonically nondecreasing functions? – isarandi Feb 28 '18 at 12:43
- @isarandi No. For example, make a function by drawing a curve that bounces between y=x and y=log(x). Do the x-to-log(x) bounce horizontally rightward, and the log(x)-to-x bounce vertically upward. – Craig Gidney Mar 01 '18 at 03:50
- Here is a better definition from MIT: http://web.mit.edu/broder/Public/asymptotics-cheatsheet.pdf – Ahmed Fwela Oct 31 '18 at 22:43
- What does n^O(1) = P even mean, given O(1) is a set? How do you raise a variable to a power that is a set? – ubadub Dec 18 '18 at 07:03
- @ubadub It means that the exponent is asymptotically constant with respect to the size of the problem. The running time has to look like n^c for some constant c. – Craig Gidney Dec 18 '18 at 18:24
- @CraigGidney I figured that out (since you said it's equal to P), but I'm not sure how that notation is valid/consistent if O(1) is generally defined as a set. – ubadub Dec 18 '18 at 22:59
- @ubadub You broadcast the exponentiation operation over the set. It's loose notation. – Craig Gidney Dec 18 '18 at 23:04
- @CraigGidney Do I understand it correctly, that if f ∈ o(g) then f ∈ O(g)? So therefore o(g) < O(g)? – Quotenbanane Apr 17 '21 at 09:56
I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that, I'm just setting the stage.)
Stuff you know: A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.
Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N⁄loglogloglogN). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."

Thus, everyone knows that your algorithm is faster -- by how much is unclear, but they know it's faster. Theoretically. :)
- Very interesting comment. How do we work with f(n) = n/log(log(log(log(n))))? If the exponent is 8 and the base is 2: 1) log 8 == 3, 2) log 3 = 1.5, 3) log(1.5) = 0.5, 4) log(0.5) = -0.7. First, it is a negative value; second, dividing by a small fraction would increase the quotient. It is unclear how this O(N⁄loglogloglogN) works. – Dmitry Dmitriev Oct 04 '21 at 07:32
- @DmitryDmitriev Good catch. A better example would be O(√N) represented as o(N). – captain Nov 10 '21 at 05:26
In general
Asymptotic notation is something you can understand as: how do functions compare when zooming out? (A good way to test this is simply to use a tool like Desmos and play with your mouse wheel). In particular:
- f(n) ∈ o(n) means: at some point, the more you zoom out, the more f(n) will be dominated by n (it will progressively diverge from it).
- g(n) ∈ Θ(n) means: at some point, zooming out will not change how g(n) compares to n (if we removed the ticks from the axis, you couldn't tell the zoom level).
Finally, h(n) ∈ O(n) means that function h can be in either of these two categories. It can either look a lot like n, or it can be smaller and smaller than n as n increases. Basically, both f(n) and g(n) are also in O(n).
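The "zoom out" intuition can also be seen numerically by watching f(n)/n as n grows: a ratio drifting to 0 means o(n), while a ratio settling at a constant means Θ(n). A small Python sketch (the concrete f and g below are illustrative choices, not from the original answer):

```python
import math

f = lambda n: math.sqrt(n)   # in o(n): dominated by n as you zoom out
g = lambda n: 3 * n + 7      # in Θ(n): indistinguishable from n as you zoom out

for n in (10, 1_000, 100_000, 10_000_000):
    print(n, f(n) / n, g(n) / n)
# f(n)/n shrinks toward 0, while g(n)/n settles near 3.
# Both ratios stay bounded, so both functions are also in O(n).
```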
I think this Venn diagram (adapted from this course) could help:
It's exactly the same as what we use for comparing numbers:
In computer science
In computer science, people will usually prove that a given algorithm admits both an upper bound O and a lower bound Ω. When both bounds meet, that means we have found an asymptotically optimal algorithm Θ to solve that particular problem.

For example, if we prove that the complexity of an algorithm is both in O(n) and Ω(n), it implies that its complexity is in Θ(n). (That's the definition of Θ, and it more or less translates to "asymptotically equal".) This also means that no algorithm can solve the given problem in o(n). Again, roughly saying "this problem can't be solved in (strictly) less than n steps".
Usually the o is used within lower-bound proofs to show a contradiction. For example:

Suppose algorithm A can find the min value in an array of size n in o(n) steps. Since A ∈ o(n), it can't see all items from the input. In other words, there is at least one item x which A never saw. Algorithm A can't tell the difference between two similar input instances where only x's value changes. If x is the minimum in one of these instances and not in the other, then A will fail to find the minimum on (at least) one of these instances. In other words, finding the minimum in an array is in Ω(n) (no algorithm in o(n) can solve the problem).
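To see that bound in practice, here is a sketch of a comparison-counting linear scan (the counter instrumentation is ours, for illustration; it assumes a non-empty input). A correct scan performs exactly n-1 comparisons, matching the Ω(n) lower bound above, so the problem is Θ(n):

```python
def min_with_count(items):
    """Linear scan; returns (minimum, number of comparisons performed)."""
    it = iter(items)
    best = next(it)          # assumes at least one element
    comparisons = 0
    for x in it:
        comparisons += 1     # every remaining element is looked at exactly once
        if x < best:
            best = x
    return best, comparisons

for n in (10, 100, 1000):
    print(min_with_count(list(range(n, 0, -1))))  # always n-1 comparisons: Θ(n)
```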
Details about lower/upper bound meanings
An upper bound of O(n) simply means that even in the worst case, the algorithm will terminate in at most n steps (ignoring all constant factors, both multiplicative and additive). A lower bound of Ω(n) is a statement about the problem itself: it says that we built some example(s) where the given problem couldn't be solved by any algorithm in less than n steps (ignoring multiplicative and additive constants). The number of steps is at most n and at least n, so this problem's complexity is "exactly n". Instead of saying "ignoring constant multiplicative/additive factors" every time, we just write Θ(n) for short.

The big-O notation has a companion called small-o notation. The big-O notation says that one function is asymptotically no more than another. To say that one function is asymptotically less than another, we use small-o notation. The difference between the big-O and small-o notations is analogous to the difference between <= (less than or equal to) and < (less than).
