You ask:
If it's O(n^3), then how does one justify that the runtime is always going to be closer to O(n^2) than to O(n^3) in practice?
The answer is that "closer" doesn't matter. Only "bigger than" and "smaller than" matter.
Big O gives an upper bound
If, for every constant c, no matter how large, the runtime of a procedure eventually exceeds c * n^2 once n is big enough, then it cannot possibly be O(n^2).
That's because the big-O operator doesn't give an estimate; it gives an upper bound. Even a procedure that runs in constant time is still O(n^3). (It's also O(n^2), O(log(n)), O(n!), and so on.) That's because its runtime is smaller than each of those functions, multiplied by some constant c, for all large enough values of n.
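As a quick aside, the constant-time claim is easy to check numerically; the function and the value of c below are made up purely for illustration:

>>> def constant_time(n):
...     return 42                # fixed cost, no matter how big n gets
...
>>> c = 0.001                    # even a tiny multiplier works
>>> all(constant_time(n) <= c * n**3 for n in range(100, 1000))
True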
A concrete example
To make this concrete, consider the following:
>>> def fa(n):
...     return n * n * n // 10
...
>>> def f2(n):
...     return n * n
...
>>> def f3(n):
...     return n * n * n
...
For the above runtimes and small n, fa is still less than or equal to f2:
>>> fa(10), f2(10), f3(10)
(100, 100, 1000)
But if we multiply n by 10, fa exceeds f2:
>>> fa(100), f2(100), f3(100)
(100000, 10000, 1000000)
And it's not hard to see that even if we boost f2 by a constant multiplier c, we can still find a value of n such that fa(n) is larger.
>>> def f2_boost(n, c):
...     return f2(n) * c
...
>>> fa(1000), f2_boost(1000, 10), f3(1000)
(100000000, 10000000, 1000000000)
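We can push this further with a small helper, written just for this answer, that finds the first n at which fa overtakes the boosted f2 for a given c:

>>> def crossover(c):
...     n = 1
...     while fa(n) <= f2_boost(n, c):    # scan until fa pulls ahead
...         n += 1
...     return n
...
>>> [crossover(c) for c in (1, 10, 100, 1000)]
[11, 101, 1001, 10001]

No matter how large we make the boost, the crossover point still exists; a bigger constant only postpones it.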
Why use a constant multiplier?
You might still find it confusing that a procedure with a runtime of n^3 * 0.1 falls in the same big-O category as a procedure with a runtime of 1000 * n^3. After all, the absolute difference between these two runtimes is huge!
This is a bit harder to explain, but it starts to make sense when you remind yourself that big O notation is supposed to describe scaling behavior: how the runtime grows as the input grows, not its absolute cost. To put it another way, big O notation is supposed to describe the runtime in a way that stays the same when we change the size of the units we use for our measurements.
Let's take a concrete example: imagine you want to know the height of a building. And suppose someone says "oh, it's about 300 meters." You might feel satisfied by that response; you might not care that it's really 315 meters; 300 is a good enough estimate. But what if, instead, they said "oh, it's about 300 meters... or was that 300 feet?" You'd probably feel a lot less satisfied, because 300 meters would be more than three times as high as 300 feet.
In computer science, we have exactly that problem when measuring time. In fact, it's even worse. Different computers might be vastly faster or slower than others. If we measure time in "number of calculations performed by a computer," then for some computers, we will be measuring time in hundredths of a second, and for other computers, we will be measuring time in billionths of a second. If we want to describe the behavior of the algorithm in a way that isn't skewed by that huge difference, then we need a measurement that is "scale invariant" -- that is, a measurement that gives the same answer whether we use hundredths of seconds or billionths of seconds as our units.
Big O notation provides such a measurement. It gives us a way to describe runtime without worrying about the size of the units we use to measure time. In essence, saying that an algorithm is O(n^2) is saying that there is some constant multiplier c and some cutoff n0 such that, for every n larger than n0, the procedure completes within c * n^2 steps. Changing the unit of time changes c, but it never changes the n^2 shape of the bound.
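Here's a minimal sketch of that definition; the runtime function, the is_bounded_by helper, and the witnesses c and n0 are all invented for illustration:

>>> def runtime(n):
...     return 5 * n**2 + 100 * n    # a made-up quadratic cost
...
>>> def is_bounded_by(f, g, c, n0, n_max=10_000):
...     # check f(n) <= c * g(n) on a sample of n past the cutoff n0
...     return all(f(n) <= c * g(n) for n in range(n0, n_max))
...
>>> is_bounded_by(runtime, lambda n: n**2, c=6, n0=100)
True
>>> # measuring in units ten million times finer only changes c
>>> is_bounded_by(lambda n: 10**7 * runtime(n), lambda n: n**2, c=6 * 10**7, n0=100)
True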
Estimating runtimes
If you want to talk about estimating runtimes, then you want a measurement called "big theta." Take a look at this answer for details. In short: big O gives an upper bound that holds for some constant multiplier c; big omega gives a lower bound that holds for some constant multiplier c; and big theta gives a single function that serves as both an upper bound and a lower bound, each for its own choice of multiplier.
In your case, the big theta value would be Θ(n^3), because you can choose a constant multiplier c1 such that c1 * n^3 is always larger than n^3 / 10, and a constant multiplier c2 such that c2 * n^3 is always smaller than n^3 / 10.
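A quick numerical check, with c1 and c2 picked by hand for this answer:

>>> c1, c2 = 1, 0.01
>>> all(c2 * n**3 <= fa(n) <= c1 * n**3 for n in range(10, 10_000))
True

Both bounds hold at once, which is exactly what big theta demands.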