First, it is of paramount importance that one not confuse the bound with the case. A bound, like Big-Oh, Big-Omega, or Big-Theta, says something about a rate of growth. A case says something about the kind of input you are currently considering feeding to your algorithm.
Let's consider a very simple example to illustrate the distinction above. Consider the canonical "linear search" algorithm:
LinearSearch(list[1...n], target)
1. for i := 1 to n do
2. if list[i] = target then return i
3. return -1
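A minimal runnable sketch of this pseudocode in Python (the function name and the 0-based indexing are my own choices; the pseudocode above is 1-based):

```python
def linear_search(lst, target):
    """Return the index of the first occurrence of target, or -1 if absent."""
    for i, value in enumerate(lst):  # 0-based, unlike the 1-based pseudocode
        if value == target:
            return i
    return -1
```

Note that the early return on line 2 of the pseudocode is what makes the best case constant: if the target sits at the front, the loop body runs once regardless of n.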
There are three broad kinds of cases one might consider: best, worst, and average cases for inputs of size n. In the best case, what you're looking for is the first element in the list (really, within any fixed number of positions from the start of the list). In such cases, it will take no more than some constant amount of time to find the element and return from the function. Therefore, the Big-Oh and Big-Omega happen to be the same for the best case: O(1) and Omega(1). When both O and Omega apply, we also say Theta, so this is Theta(1) as well.
In the worst case, the element is not in the list, and the algorithm must go through all n entries. Since f(n) = n happens to be a function that is bounded from above and from below by the same class of functions (linear ones), this is Theta(n).
Average-case analysis is usually a bit trickier. We need to define a probability space for viable inputs of length n. One might say that all valid inputs (where integers can be represented using 32 bits in unsigned mode, for instance) are equally probable. From that, one could work out the average performance of the algorithm as follows:
- Find the probability that target is not represented in the list. Multiply by n.
- Given that target is in the list at least once, find the probability that it appears at position k, for each 1 <= k <= n. Multiply each P(k) by k.
- Add up all of the above to get a function in terms of n.
Notice that in step 1 above, if the probability is non-zero, we will definitely get at least a linear function (exercise: we can never get more than a linear function). However, if the probability in step 1 is indeed zero, then the assignment of probabilities in step 2 makes all the difference in determining the complexity: you can have best-case behavior for some assignments, worst-case for others, and possibly end up with behavior that isn't the same as best (constant) or worst (linear).
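As a small sanity check on the recipe above, here is a Python sketch for one particular assignment of probabilities: the target is always present (so step 1 contributes nothing) and is uniformly likely to sit at each of the n positions. This assignment is my own illustrative choice, not the only reasonable one:

```python
from fractions import Fraction

def expected_comparisons(n):
    """Expected comparisons when the target is uniform over positions 1..n."""
    p = Fraction(1, n)                           # P(k) = 1/n for every k
    return sum(p * k for k in range(1, n + 1))   # sum over k of P(k) * k
```

Under this assignment the sum works out to (n + 1)/2, which is linear in n: even though the target is always present, the average case lands at Theta(n), matching the worst case up to a constant factor.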
Sometimes, we might speak loosely of a "general" or "universal" case, which considers all kinds of input (not just the best or the worst), but that doesn't give any particular weighting to inputs and doesn't take averages. In other words, you consider the performance of the algorithm in terms of an upper-bound on the worst-case, and a lower-bound on the best-case. This seems to be what you're doing.
Phew. Now, back to your question.
Are there functions which have different O and Omega bounds? Definitely. Consider the following function:
f(n) = 1 if n is odd, n if n is even.
The best case is "n is odd", in which case f is Theta(1); the worst case is "n is even", in which case f is Theta(n); and if we assume for the average case that we're talking about 32-bit unsigned integers, then f is Theta(n) in the average case, as well. However, if we talk about the "universal" case, then f is O(n) and Omega(1), and not Theta of anything. An algorithm whose runtime behaves according to f might be the following:
Strange(list[1...n], target)
1. if n is odd then return target
2. else return LinearSearch(list, target)
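A self-contained Python sketch of Strange (function names are my own; the odd-length branch does constant work, the even-length branch falls back to a linear scan):

```python
def linear_search(lst, target):
    """Return the index of the first occurrence of target, or -1 if absent."""
    for i, value in enumerate(lst):
        if value == target:
            return i
    return -1

def strange(lst, target):
    """Theta(1) when len(lst) is odd, Theta(len(lst)) when it is even."""
    if len(lst) % 2 == 1:
        return target                 # constant-time branch: n is odd
    return linear_search(lst, target) # linear branch: n is even
```

The runtime, as a function of n = len(lst), is exactly the f above: 1 when n is odd, n when n is even.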
Now, a more interesting question might be whether there are algorithms for which some case (besides the "universal" case) cannot be assigned a valid Theta bound. This is interesting, but not overly so. The reason is that you, during your analysis, are allowed to choose the cases that constitute best- and worst-case behavior. If your first choice of case turns out not to have a Theta bound, you can simply exclude the inputs that are "abnormal" for your purposes. The case and the bound aren't completely independent, in that sense: you can often choose a case such that it has "good" bounds.
But can you always do it?
I don't know, but that's an interesting question.