My question comes from the following examples:
Scenario 1. Dropping constants
Let us assume 'n' is an integer.
for (int i = 0; i < n; i++) {
    // print i
}
for (int i = 0; i < 5 * n; i++) {
    // print i
}
Theory says the time complexity is O(n), NOT O(5n).
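As far as I can tell, the dropping follows from the formal definition of Big O (this is the standard textbook definition, not something from my examples):

\[
f(n) = O\bigl(g(n)\bigr) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0.
\]
\[
\text{With } f(n) = 5n \text{ and } g(n) = n, \text{ choose } c = 5,\ n_0 = 1:\quad 5n \le 5 \cdot n \ \text{for all } n \ge 1, \ \text{so } 5n = O(n).
\]

The constant 5 is simply absorbed into c, so Big O records only the growth rate, not the absolute operation count.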
Scenario 2. Dropping the non-dominant term
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // print (i, j)
    }
}
for (int i = 0; i < n; i++) {
    // print i
}
Theory says the time complexity is O(n²), NOT O(n²) + O(n).
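The same definition absorbs the non-dominant term (again, my own worked bound using the standard definition above):

\[
n^2 + n \le n^2 + n^2 = 2n^2 \quad \text{for all } n \ge 1,
\]
\[
\text{so with } c = 2 \text{ and } n_0 = 1:\quad n^2 + n = O(n^2).
\]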
Following intuition, it seems that:
Scenario 1: the second loop, O(5n), should take more time, since 5n > n.
Scenario 2: the O(n) loop should also contribute, since its cost too grows with n.
What is a plain English explanation of "Big O" notation?
From the above Stack Overflow question, I understand that when the input becomes very large (or in the worst case), the non-dominant terms and constants stop contributing significantly to the time complexity. But in the practical world, small-to-medium inputs are quite common. Aren't we losing precision here?
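To make my concern concrete, here is a minimal sketch (my own illustration; the class and variable names are hypothetical) that computes how much of the total work in Scenario 2 comes from the dropped O(n) term:

// Compares the exact iteration count (n^2 + n) of the two loops in
// Scenario 2 against the Big O estimate (n^2) alone, for several input sizes.
public class BigODemo {
    public static void main(String[] args) {
        for (long n : new long[]{10, 100, 1_000, 10_000, 100_000}) {
            long exact = n * n + n;   // total iterations of both loops
            long bigO  = n * n;       // iterations predicted by O(n^2) alone
            double droppedShare = (double) (exact - bigO) / exact;
            System.out.printf("n = %,d: exact = %,d, n^2 = %,d, dropped term = %.4f%% of total%n",
                    n, exact, bigO, 100 * droppedShare);
        }
    }
}

Even at n = 100 the dropped term is under 1% of the total, and by n = 10,000 it is about 0.01%, so the lost precision shrinks quickly; still, for very small inputs the difference is measurable, which is the source of my question.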