Why are constants ignored in asymptotic analysis?
-
See [Big-O for Eight Year Olds](http://stackoverflow.com/questions/107165/big-o-for-eight-year-olds) – Sven Marnach Nov 11 '10 at 00:47
-
possible duplicate of [Plain English explanation of Big O](http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o) – casablanca Nov 11 '10 at 01:14
3 Answers
Constant factors are ignored because running time and memory consumption (the two properties most often measured with O-notation) become much harder to reason about once constant factors are taken into account.
If we define `U(f(n))` to be the set of all functions `g` for which there exists an `N` such that for all `n > N`: `g(n) <= f(n)` (i.e. the same as `O`, but without the constant factor), it is much harder to show that an algorithm's running time is in `U(f(n))` than in `O(f(n))`.
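For comparison, the two sets written out side by side; the only difference is the constant factor `c`:

```latex
U(f(n)) = \{\, g \mid \exists N\ \forall n > N:\ g(n) \le f(n) \,\}
O(f(n)) = \{\, g \mid \exists c > 0\ \exists N\ \forall n > N:\ g(n) \le c \cdot f(n) \,\}
```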
For one thing, we'd need an exact unit for measuring running time. Using a CPU instruction as the basic unit would work, but that'd depend on the exact implementation of the algorithm as well as the processor architecture it runs on.
It's similar for memory consumption: different implementations of the same algorithm will differ in their memory consumption (by a constant factor). Further, if an implementation uses a lot of pointers, the same implementation will use about twice as much memory on a 64-bit machine as on a 32-bit machine. But saying things like "this algorithm's memory consumption, when implemented using this C code, is in `O(23 * n)` on a 32-bit Intel PC and in `O(42 * n)` on a 64-bit PC" is just not useful.
Ignoring constants allows us to reason about the properties of an algorithm in an implementation- and platform-independent manner.
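As a rough, CPython-specific sketch of such a platform-dependent constant: the memory a list of `n` references occupies grows as roughly `c * n`, where `c` depends on the build's pointer size (about 8 bytes per reference on a 64-bit interpreter, 4 on a 32-bit one), but it is in `O(n)` either way.

```python
import sys

# Rough sketch: the per-element cost c of a list of n references depends on
# the platform's pointer size, but the total is still c*n + overhead = O(n).
# (sys.getsizeof counts only the list's pointer array, not the elements.)
for n in (1_000, 10_000, 100_000):
    lst = list(range(n))
    print(n, sys.getsizeof(lst), round(sys.getsizeof(lst) / n, 2))
```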

It's because of the linear speedup theorem for Turing machines.
If you show me a Turing machine that solves a problem of size n in f(n) steps, and specify a constant c > 0, I can make a Turing machine that solves the same problem in c f(n) steps (or one step, if c f(n) < 1). For example, by taking c = ½ my machine can solve the same problem in half as many steps. Or, by taking c = 1/1000000, my machine can solve the same problem in only a millionth as many steps!
This result makes constant factors uninteresting (theoretically speaking: obviously in practice they still have some interest).
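The real proof re-encodes the tape so that one symbol packs several cells; here is only a toy sketch of that idea (the function and chunk sizes are made up for illustration): do `k` cells' worth of work per simulated step and the step count drops by a factor of `k`, while the answer stays the same.

```python
def sum_in_steps(cells, k=1):
    """Sum `cells`, doing k cells' worth of work per simulated machine step."""
    steps = 0
    total = 0
    for i in range(0, len(cells), k):
        total += sum(cells[i:i + k])  # one "step" consumes k cells
        steps += 1
    return total, steps

cells = list(range(1_000))
print(sum_in_steps(cells, k=1))    # (499500, 1000)
print(sum_in_steps(cells, k=10))   # (499500, 100) -- same answer, 1/10 the steps
```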

If you are talking about this
http://www.cs.cornell.edu/courses/cs312/2004fa/lectures/lecture16.htm
When you analyze the running time (or some other aspect) of an algorithm and find that it is something like `n^2 + k`, then, when you are figuring out the big-O running time, it makes no sense to look at `k`, because you want to know the running time as `n` gets large. `n^2` is so much larger than `k` that you can safely ignore it.
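A quick numeric check of that claim (the value of `k` below is arbitrary):

```python
# For a cost of n^2 + k, the contribution of k fades as n grows.
k = 1_000
for n in (10, 100, 1_000, 10_000):
    cost = n**2 + k
    print(n, cost, round(cost / n**2, 4))  # ratio approaches 1
```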

-
However `n ^ 2` vs. `100 * n ^ 2` does make something of a difference, practically speaking, even for large `n`, but they're both still in `O(n^2)`. – sepp2k Nov 11 '10 at 00:54
-
When do you ever have a constant 100? That means you've manually typed 100 of the same lines of code. If you're doing a `for x in range(100): O(n^2) operation`, I would call that n^3 – Falmarri Nov 11 '10 at 01:03
-
@Falmarri: It only means that if we're measuring lines of code (which is an unusual thing to measure with big O). When talking about running time, the slower implementation might very well be implemented using less lines of code. – sepp2k Nov 11 '10 at 01:12
-
@Falmarri: As to when you'd encounter such constant factors, here's a trivial example: You're looping through an array and adding a certain value to each element. That value takes 100 steps to calculate. In one implementation you calculate the value once before the loop, and in another you calculate it in each iteration. Both implementations will have a running time in `O(n)`, but the latter will be slower by a factor of about 100. – sepp2k Nov 11 '10 at 01:26
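A minimal sketch of the two implementations described in that last comment; `expensive_value` is a made-up stand-in for the 100-step calculation:

```python
def expensive_value():
    # Stand-in for a calculation that takes about 100 steps.
    total = 0
    for _ in range(100):
        total += 1
    return total

def add_value_hoisted(arr):
    v = expensive_value()            # computed once, before the loop
    return [x + v for x in arr]      # O(n)

def add_value_naive(arr):
    # Same result and still O(n), but roughly 100x more work per element.
    return [x + expensive_value() for x in arr]

print(add_value_hoisted([1, 2, 3]) == add_value_naive([1, 2, 3]))  # True
```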