To be precise, O(...) is a set of functions, not a number. People are sometimes sloppy and write x = O(y) instead of x \in O(y).
You can find the exact definition of the set O(y) in the "Formal definition" column of this table: https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations
What does this mean informally? O(f(x)) contains the functions that grow at most as fast as f(x).
For example, if you have g(n) = n^2 + 5n - 7000, then it is in O(n^2), since n^2 is the dominant term of the function (compare with the exact definitions).
Constant factors can also be dropped, so g(n) = 1000n^2 is in O(n^2) as well.
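A quick numeric sketch of both claims (not a proof): by the formal definition, g is in O(n^2) if there are constants c and n0 with g(n) <= c * n^2 for all n >= n0. The witness constants below are my own choices for illustration.

```python
# Numeric check that g(n) = n^2 + 5n - 7000 and 1000*n^2 are both
# bounded by a constant times n^2, i.e. both lie in O(n^2).
def g(n):
    return n**2 + 5*n - 7000

# Witness constants (one possible choice): g(n) <= c * n^2 for all n >= n0.
c, n0 = 2, 1
assert all(g(n) <= c * n**2 for n in range(n0, 100_000))

# For 1000*n^2 the constant factor itself serves as the witness.
assert all(1000 * n**2 <= 1000 * n**2 for n in range(1, 100_000))
```

A finite check like this can only support the intuition; the actual proof is the inequality 5n - 7000 <= n^2, which holds for every n.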
Thus, O(...) is only an indicator of which variables something depends on, and how strongly. For example, there exists an input size (it may be very big) beyond which n^2 is larger than 100000000 * n.
Because of that, an algorithm with time complexity O(n^2) is in general worse than one in O(n). But, and this is very important, in practice it may still be preferable: the constant factors hidden in the O(n) algorithm may be so big that the O(n^2) algorithm is faster for the input sizes that actually occur. There is always a bound on the input size (possibly very large) beyond which the O(n) algorithm wins, but such inputs may never occur in practice.