I know that the big-O complexity of the following function is O(n^3)
4n^3 + 6n + 6
because n^3 is the dominant term.
Would the big-O complexity be the same for the same function, but with a negative coefficient on the dominant term?
-4n^3 + 6n + 6
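(Here is the sketch I have in mind for the dominant-term claim; the constants M = 16 and n0 = 1 are just one choice that works, not the only one.)

```latex
% Sketch of the dominant-term bound I have in mind (M = 16, n_0 = 1 is just
% one choice of constants that works).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For $n \ge 1$ each lower-order term is at most $n^3$, so
\[
  4n^3 + 6n + 6 \le 4n^3 + 6n^3 + 6n^3 = 16n^3
  \qquad \text{for all } n \ge 1,
\]
which gives $4n^3 + 6n + 6 = O(n^3)$ with $M = 16$ and $n_0 = 1$.
\end{document}
```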
Actually, if you have negative terms in a big-O computation, you can ignore them, because they only save you time.
In this case, the complexity would be O(n).
I don't know what kind of algorithm something like that could correspond to, though. But to answer the general question, you could have something like O(an^2 - bn), which would give an O(n^2) complexity.
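Here is a quick sketch of why dropping the negative term is safe, assuming a, b > 0 (an assumption I'm making, it isn't stated above):

```latex
% Sketch, assuming a, b > 0 (my assumption, not stated above): removing the
% negative term can only make the expression larger.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Since $b n \ge 0$ for $n \ge 0$,
\[
  a n^2 - b n \le a n^2 \qquad \text{for all } n \ge 0,
\]
so $a n^2 - b n = O(n^2)$ with $M = a$ and $n_0 = 0$ in the usual definition.
\end{document}
```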
Edit:
I found a funny related question about time travel in algorithms.
Formally, we analyse monotonically increasing functions; this is implied by the formal definitions of asymptotic complexity.
Let's look at one of the definitions, from Wikipedia:
One writes
f(x) = O(g(x)) as x -> inf
if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x>x0
As you can see, this definition works on absolute values.
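As a worked check of my own (not part of the original question), applying the quoted definition to the asker's function gives:

```latex
% Worked check (my own, using the quoted definition): by the triangle
% inequality and n <= n^3, 1 <= n^3 for n >= 1,
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \lvert -4n^3 + 6n + 6 \rvert \le 4n^3 + 6n + 6 \le 16n^3
  \qquad \text{for all } n \ge 1,
\]
so $-4n^3 + 6n + 6 = O(n^3)$, taking $M = 16$ and $x_0 = 1$ in the
definition above.
\end{document}
```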
In some other sources (such as books on data structures and algorithms) you might find definitions without the absolute value, but then there is an assumption somewhere that the analysed functions are monotonically increasing (warning: sometimes these assumptions are hidden in the book's references or implied by properties of the analysed universe).
To sum it up: asymptotic analysis is designed to be used on monotonically increasing functions. Sometimes this is enforced by an assumption, sometimes by the absolute value in the equation.
You may find other arguments, like in this other SO answer, but with the same conclusion.