
My question comes from the following examples:

Scenario 1. Dropping constants
Let us assume 'n' is an integer.

for (int i = 0; i < n; i++) {
    // print i
}
    
for (int i = 0; i < 5 * n; i++) {
    //print i
}

Theory says Time-Complexity = O(n) and NOT O(5n)

Scenario 2. Dropping the non-dominant term

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        //print (i,j)
    }
}
for (int i = 0; i < n; i++) {
    // print i
}

Theory says Time-Complexity = O(n²) and NOT O(n²) + O(n)

Following intuition, it seems that:
Scenario 1: the second loop, at O(5n), should take more time, because 5n > n
Scenario 2: the O(n) loop could also contribute, since it varies with n
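
As a quick sanity check (n = 1000 below is an arbitrary choice, and the class name is only for this sketch), counting how many times each loop body actually runs gives exactly 5n for the second loop of Scenario 1 and n² + n for Scenario 2:

public class IterationCounts {
    public static void main(String[] args) {
        int n = 1000; // arbitrary sample size, just for illustration

        // Scenario 1: the two loops run n and 5 * n times, respectively.
        long first = 0, second = 0;
        for (int i = 0; i < n; i++) first++;
        for (int i = 0; i < 5 * n; i++) second++;
        System.out.println("Scenario 1: " + first + " vs " + second + " iterations");

        // Scenario 2: the nested loop runs n * n times, the single loop n times.
        long nested = 0, single = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                nested++;
        for (int i = 0; i < n; i++) single++;
        System.out.println("Scenario 2: " + (nested + single)
                + " iterations in total, of which " + single + " come from the O(n) part");
    }
}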

What is a plain English explanation of "Big O" notation?

From the question linked above, I understand that when the input becomes very large, or in the worst case, the non-dominant term and the constants may not contribute significantly to the time complexity. But in the practical world, small to medium-sized inputs are quite common. Aren't we losing precision here?

  • The constant factors have no meaning anyway, as each instruction/line/step is counted as one. On one architecture it could take 1 ns, on another 50 µs. The same is true for different instructions on the same architecture. As to the non-dominant terms: the O notation is the limit for very large (infinite) n. That is how it is defined. In practice, a more specific investigation considering actual n values and the actual computation time of each specific instruction could make sense, but then you need much deeper information about the architecture, or you should make measurements. – Sebastian Aug 12 '23 at 11:28
  • Yes, complexity is a very coarse measure of algorithmic efficiency. – Paul Hankin Aug 12 '23 at 11:49
  • Yes, you are right that complexity theory drops lots of precision and doesn't try to determine exact times. – Stef Aug 12 '23 at 15:51
  • However, your understanding of O() notation is far from perfect. *"Theory says Time-Complexity = O(n) and NOT O(5n)"* << Nope, theory doesn't say that the time complexity is not O(5n). In fact, theory says that O(5n) = O(n). Thus writing "O(5n)" is no more precise than writing "O(n)". – Stef Aug 12 '23 at 15:52
  • What would be more precise would be saying something like "The number of times that the operation `<` is performed is exactly 5n" (note: no big-O in this sentence), or "The number of times that the operation `++` is performed is exactly 5n", or something specific like that. One of the reasons we use big-O notation is to signal to the reader: "I am not being specific about which operations I'm counting and how long they take, so keeping the constant factor and the non-dominant terms would only be deceiving; it would be fake precision." – Stef Aug 12 '23 at 15:56
  • Many simulations of one computer system by another can do one instruction in a constant number of new instructions, so the O notation is just the right amount of generalization for determining the performance of an algorithm. – Sebastian Aug 12 '23 at 16:57
  • n vs 5n *what*? Asymptotic complexity is the most accurate time you can give *without specifying units*. – Matt Timmermans Aug 12 '23 at 17:06
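
To make the operation-counting comments above concrete, here is a small sketch (n = 1000 and the class name are arbitrary) that unrolls the second loop of Scenario 1 by hand and tallies two different operations; note that even the "exact" constant depends on which operation you decide to count:

public class ExactCounts {
    public static void main(String[] args) {
        int n = 1000; // arbitrary

        // Manually unrolled version of: for (int i = 0; i < 5 * n; i++) { ... }
        long comparisons = 0, increments = 0;
        int i = 0;
        while (true) {
            comparisons++;              // the test `i < 5 * n`
            if (!(i < 5 * n)) break;    // one extra, failing test ends the loop
            // the loop body (printing i) would go here
            increments++;               // the update `i++`
            i++;
        }

        System.out.println("increments  = " + increments);   // exactly 5n
        System.out.println("comparisons = " + comparisons);  // 5n + 1
    }
}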

1 Answer


Nope!

On my machine, in scenario 1 that second loop executes faster than the first one, since my computer goes into a special “ultra turbo mode” when it detects multiplication by 5. But on my friend’s computer, multiplication is really slow so that second loop takes ten times as long as the first. Also, on my other friend’s computer the second one takes an extra hour to run, because that’s how long it takes to warm up the multiplier unit in order to start the loop.

But on all of these machines, as well as any reasonable machine you could imagine, both loops take linear asymptotic time.

See how “O(5*n)” isn’t as precise as you think?

If you are trying to quantify running time in a machine-dependent way, that’s not what asymptotic analysis is for. It’s what your profiler is for.
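
If you do want machine-dependent numbers, a crude timing sketch along these lines is one option (no substitute for a real profiler; the results vary from machine to machine, and JIT warm-up skews small measurements):

public class CrudeTiming {
    static volatile long sink; // keeps the JIT from optimizing the loops away

    public static void main(String[] args) {
        int n = 50_000_000; // large enough that the measurement is not pure noise

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) sink = i;
        long t1 = System.nanoTime();
        for (int i = 0; i < 5 * n; i++) sink = i;
        long t2 = System.nanoTime();

        // The ratio is roughly 5 on many machines, but the absolute times
        // (and even the ratio) depend on hardware, JIT state, caches, etc.
        System.out.println("first loop:  " + (t1 - t0) / 1e6 + " ms");
        System.out.println("second loop: " + (t2 - t1) / 1e6 + " ms");
    }
}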

Sneftel
  • I understand the scenario for a constant. Could you add an example of when we drop a non-dominant term as well? – PolamreddyVivekReddy Aug 13 '23 at 06:27
  • @PolamreddyVivekReddy The O notation is about the limit for large n. The ratio of the non-dominant terms to the dominant one gets arbitrarily small for large n; in particular, it eventually gets smaller than any arbitrarily chosen fixed threshold t. So the constant-factor rule also covers removing non-dominant terms, since you can absorb dominant + non-dominant terms into a constant factor: O(n) = O((1 + t) * n). – Sebastian Aug 13 '23 at 07:49
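
To see that comment numerically, here is a small sketch that prints the share of the non-dominant O(n) term in the total n² + n work of Scenario 2 as n grows; the share equals 1/(n + 1), so it drops below any fixed threshold t once n is large enough:

public class NonDominantShare {
    public static void main(String[] args) {
        // Share of the non-dominant O(n) term in the total n^2 + n iterations.
        for (long n = 10; n <= 1_000_000; n *= 10) {
            double share = (double) n / (n * n + n); // = 1 / (n + 1)
            System.out.printf("n = %,10d  ->  O(n) share of the work = %.6f%n", n, share);
        }
    }
}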