Finding time complexity is often explained in ways that aren't very helpful. Here is how it works for Selection Sort.
passes
The very first time through the algorithm, you must scan all n elements of the data.
The next pass (or recursive call, if you write it that way) must scan all but one of them, which is (n-1).
And so on. We can write the number of times we look at/compare an element as:
n + (n-1) + (n-2) + ... + 2 + 1
(You can quibble about that last term, but for simplicity here we just won't care. You'll see why in just a second.)
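As a sanity check, here is a sketch (in Python, not part of the original text) of Selection Sort with a comparison counter bolted on, so you can watch the passes cost n-1, n-2, …, 1 comparisons:

```python
# Sketch: Selection Sort instrumented with a comparison counter.
# The counter tallies exactly the (n-1) + (n-2) + ... + 1 comparisons
# described above (comparisons, not "elements scanned", hence the quibble).
def selection_sort(data):
    data = list(data)              # work on a copy
    comparisons = 0
    n = len(data)
    for i in range(n - 1):         # one pass per position to fill
        smallest = i
        for j in range(i + 1, n):  # scan the unsorted tail
            comparisons += 1
            if data[j] < data[smallest]:
                smallest = j
        data[i], data[smallest] = data[smallest], data[i]
    return data, comparisons

result, count = selection_sort([5, 3, 8, 1, 9, 2])
print(result, count)  # for n = 6: 5 + 4 + 3 + 2 + 1 = 15 comparisons
```

Run it with any six elements and you get 15 comparisons, regardless of the values.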
math identities
This particular series (notice all the additions) is called an “arithmetic progression”: the difference between consecutive terms is constant, here 1. The first term is n and the last term is 1 (or 2, whatever).
And using some math most people don’t remember (or weren’t taught) in High School, we can rewrite that sum as:
n(n+1)
──────
2
(Yes, again, that last term is actually supposed to be a two, even though I’ve left it at one.)
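If you don't remember (or don't trust) that identity, you don't have to take it on faith. A quick brute-force check (a Python sketch, not from the original):

```python
# Brute-force check of the arithmetic-series identity:
#   1 + 2 + ... + n == n(n+1)/2
# for a healthy range of n values.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("identity holds for n = 1..199")
```

Of course a loop isn't a proof, but it's a fine way to convince yourself the formula isn't lying to you.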
as n grows arbitrarily large
Big O doesn’t care much about friendly values of n. It cares about what happens when n gets arbitrarily big. We say “n grows towards infinity”.
As it does, we can notice two things happening:
- That division by 2 becomes insignificant: ∞/2 → ∞ (constant factors don’t change the growth)
- That addition by 1 (or whatever) is also insignificant: ∞(∞+1) → ∞(∞)
So ultimately, we have infinity, er, n multiplied by itself. The equation simplifies to:
n(n+1)
────── → n²
2
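You can see this “the constants stop mattering” effect numerically, too. Here's a sketch (mine, not the original author's) comparing f(n) = n(n+1)/2 against n² as n grows:

```python
# Watch f(n) = n(n+1)/2 relative to n^2 as n grows.
# The ratio settles toward a constant (1/2) — and that constant is
# exactly the kind of thing Big O throws away.
for n in (10, 1_000, 1_000_000):
    f = n * (n + 1) / 2
    print(f"n = {n:>9}: f(n)/n^2 = {f / (n * n):.6f}")
```

The ratio creeps toward 0.5 and stays there: f(n) is n² times a constant, which is all “order n²” means.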
complexity bounds
The worst case behavior is n², so we annotate that as “O(n²)”. But notice that the best case is also n²: Selection Sort scans the same number of elements no matter how the input happens to be arranged. We annotate that as “Ω(n²)”. Finally, since the best and worst case behaviors are the same, we have a very nice tight bound on the behavior of the function. We annotate this as “Θ(n²)”.
Hence, selection sort has Θ(n²) complexity.
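That “best case equals worst case” claim is easy to check empirically. A sketch (again mine, with an assumed helper `count_comparisons`): feed the same algorithm an already-sorted input, a reversed input, and a shuffled one, and count comparisons:

```python
import random

# Selection Sort's comparison count depends only on n, never on the
# input's order — which is exactly why its best and worst cases
# coincide and we get a tight Theta bound.
def count_comparisons(data):
    data = list(data)
    comparisons = 0
    n = len(data)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1
            if data[j] < data[smallest]:
                smallest = j
        data[i], data[smallest] = data[smallest], data[i]
    return comparisons

n = 50
print(count_comparisons(list(range(n))),        # already sorted
      count_comparisons(list(range(n, 0, -1))), # reversed
      count_comparisons(random.sample(range(n), n)))  # shuffled
```

All three print the same number, n(n-1)/2 = 1225 for n = 50. Contrast this with, say, insertion sort, where the input's order changes the count dramatically.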
holy quiznak! I’m supposed to do this myself!!?
Yes, unfortunately, figuring out complexity is one of those things that people treat as if it were really easy — even when they don’t understand it well themselves. It takes some familiarity with math and some practice. The typical advice is like the links provided above: ‘look at these common cases and choose the one that most closely matches’. I personally find that less than satisfying as instruction. It is also one of those things that university professors expect you to pick up right away.
My advice: don’t worry too hard. Do your best to understand the underlying principles. What we are doing is finding a function (like y = x²) that represents an algorithm’s behavior when its input gets really large (n → ∞).
Being able to follow through the paths that the algorithm takes and recognizing when those paths are expensive is a more important ability than being able to come up with a mathematician’s answer. If you really need it you can always find a mathematician to help, either at the university or on the internet if you ask in the right places.
And, of course, feel free to take time to try to understand better and practice. Being able to do it right is a valuable skill, regardless of whether others see it.
:O)