That is big O notation; it describes an algorithm's order of efficiency (its growth rate):
O(1), not O(100) - constant time - whatever the input, the algorithm executes in constant time
O(log(n)) - logarithmic time - as the input gets larger, so does the time, but by a decreasing amount
O(n) - linear time - the time grows in direct proportion to the input size
O(n*log(n)) - linearithmic time - grows faster than linear, but not as fast as the following
O(n^2), or generally O(n^k) where k is a constant - polynomial time - usually the worst class still considered feasible
There are worse classes, considered unfeasible for all but small inputs: exponential O(2^n) and factorial O(n!), for example.
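To make the first few classes concrete, here is a minimal sketch (the function names are my own, chosen for illustration):

```python
def first_item(items):
    # O(1): one step, regardless of how long the list is.
    return items[0]

def binary_search(sorted_items, target):
    # O(log(n)): each iteration halves the remaining search range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def all_pairs(items):
    # O(n^2): a nested loop over the input produces n*n pairs.
    return [(a, b) for a in items for b in items]
```

Doubling the input adds roughly one more iteration to binary_search, but quadruples the work done by all_pairs.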
This notation is only a rough guide. For example, some algorithms that are O(n^2) in the worst case can perform faster, on average, than O(n*log(n)) ones - see quicksort.
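A minimal quicksort sketch (a simple non-in-place variant, not the only way to write it): its average case is O(n*log(n)), but a consistently bad pivot degrades it to O(n^2).

```python
def quicksort(items):
    # Base case: lists of 0 or 1 elements are already sorted.
    if len(items) <= 1:
        return items
    # Pivot choice matters: a pivot near the median splits the work
    # evenly (average O(n*log(n))); a pivot at an extreme gives O(n^2).
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```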
This notation is also an upper bound, meaning it describes a worst-case scenario. It can be used for either space complexity or time complexity, where n is the size of the input.