
I know that the big-O complexity of the following function is O(n^3)

4n^3 + 6n + 6

because the n^3 term is dominant.
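To make the dominant-term argument explicit: for all $n \ge 1$,

$$4n^3 + 6n + 6 \le 4n^3 + 6n^3 + 6n^3 = 16n^3,$$

so $4n^3 + 6n + 6$ is bounded above by a constant multiple of $n^3$ and is therefore $O(n^3)$.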

Would the big-O complexity be the same for the same function, but with a negative coefficient?

-4n^3 + 6n + 6

jsan
    If $-4n^3+6n+6$ is really an upper bound for the running time (otherwise why care about its asymptotic growth?), it means that for large enough inputs the program is guaranteed to terminate before it was started! I'd like to see that program. – hmakholm left over Monica Oct 23 '12 at 21:33
  • Is that number (`-4n^3 + 6n + 6`) something related to the total number of operations? Does it make sense only for some particular values? (I mean, not for large `n`s). Because I suppose that you just want to compute `-4n^3 + 6n + 6`, which will be in `O(1)`. – ROMANIA_engineer Jan 27 '16 at 10:21

2 Answers


Actually, if you have negative terms in a big-O computation, you can ignore them, because they only make the running time smaller.

In this case, the complexity would be O(n).

I don't know what kind of algorithm such a function could correspond to, but to answer the general question: you could have a running time like an^2 - bn, which would give an O(n^2) complexity.
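To make that concrete (a quick worked bound, assuming $a, b > 0$): for all $n \ge 1$,

$$|an^2 - bn| \le an^2 + bn \le (a + b)\,n^2,$$

so $an^2 - bn = O(n^2)$ with constant $M = a + b$.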

Edit:

Found a funny related question about time travel in algorithms.

alestanis
  • It's not being applied to any function or algorithm, I'm just reading through my textbook and they give a few examples with positive coefficients on the highest term, but none with negative coefficients. – jsan Oct 23 '12 at 21:36
  • You will also find examples with negative middle terms, like the one I gave, but never with a dominant negative term. – alestanis Oct 23 '12 at 21:38

Formally, we analyse monotonically increasing functions. This is implied by the formal definitions of asymptotic complexity.

Let's look at one of the definitions, from Wikipedia:

One writes

f(x) = O(g(x)) as x -> inf

if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that

|f(x)| <= M|g(x)| for all x>x0

As you can see, this definition works on absolute values.
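As an illustration, applying this definition to the polynomial from the question: for all $n \ge 1$,

$$|-4n^3 + 6n + 6| \le 4n^3 + 6n + 6 \le 16n^3,$$

so choosing $M = 16$ and $x_0 = 1$ satisfies the definition, and $-4n^3 + 6n + 6 = O(n^3)$.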

In some other sources (like books about data structures and algorithms) you might find definitions without the absolute value, but somewhere there is an assumption that the analysed functions are monotonically increasing (warning: sometimes such assumptions are hidden in the book's references or implied by properties of the analysed universe).

To sum it up: asymptotic analysis is designed to be used on monotonically increasing functions. Sometimes this is enforced by an explicit assumption, sometimes by the absolute value in the definition.

You may find other arguments, like in this other SO answer, but with the same conclusion.

Grzegorz Wierzowiecki