Is it common for a looser upper bound to be used in Big-O notation for convenience or a simpler look?
For example, I'm looking at the algorithm "bifragment gap carving", briefly explained here (page 66). If I understand it correctly, for a gap of $n$ clusters the algorithm performs at most $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$ object validations. Here is a small sketch of how I read the search:
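(This is a minimal brute-force count, assuming the carver tries every possible gap, i.e. every choice of where the gap starts and ends between the header fragment and the footer fragment; the function name and loop bounds are my own reading of page 66, not code from the paper.)

```python
def count_validations(n):
    """Count the candidate reassemblies a brute-force bifragment gap
    carver would have to validate when n clusters lie between the
    header fragment and the footer fragment (my assumption)."""
    validations = 0
    # Try every possible gap: it starts at cluster i and ends at
    # cluster j, with 1 <= i <= j <= n.
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            validations += 1  # one object validation per candidate
    return validations

for n in (1, 2, 3, 10):
    print(n, count_validations(n), n * (n + 1) // 2)  # matches n(n+1)/2
```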
But in the same document it says:
The technique does not scale for files fragmented with large gaps. If $n$ is the number of clusters between $b_h$ and $b_z$ then in the worst case, $n^2$ object validations may be required before a successful recovery.
So my question is: am I misunderstanding the algorithm, or was the worst-case runtime rounded up to $n^2$ to look nicer than a sum?