“Formally, O-notation expresses the upper bound of a function within a constant factor. Specifically, if g(n) is an upper bound of f(n), then for some constant c it is possible to find a value of n, call it N, for which any value of n ≥ N will result in f(n) ≤ cg(n).”
This is a quote from an algorithms textbook I'm reading. By this definition, wouldn't O-notation be almost meaningless?
If the running time of some algorithm is T(n) = 5n + 10, then T(n) is O(n): taking c = 6 and N = 10, we have 5n + 10 ≤ 6n for all n ≥ 10. But by the same definition it is also O(n^2), O(2^n), and so on:
Taking c = 1 in f(n) ≤ cg(n):

5n + 10 ≤ n^2 for all n > 6
5n + 10 ≤ 2^n for all n > 5
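To convince myself I'm reading the definition correctly, here is a quick Python check (my own sketch, not from the textbook, using c = 1) that verifies both thresholds:

```python
# Sanity check of the two thresholds above, using c = 1 in f(n) <= c*g(n).

def f(n):
    return 5 * n + 10

bounds = [
    (lambda n: n ** 2, "n^2", 6),  # claimed: holds for all n > 6
    (lambda n: 2 ** n, "2^n", 5),  # claimed: holds for all n > 5
]

for g, name, threshold in bounds:
    assert f(threshold) > g(threshold)  # the bound still fails at the threshold itself
    assert all(f(n) <= g(n) for n in range(threshold + 1, 1000))
    print(f"5n+10 <= {name} for every n > {threshold} (checked up to n = 999)")
```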
I understand the concept of worst-case analysis. It just seems like the notation has barely any link to an algorithm's actual complexity, which is very confusing.