0

I'm at a loss as to how to calculate the best-case running time, Ω(f(n)), for this algorithm. In particular, I don't understand how there could be a "best case" at all: this seems like a very straightforward algorithm. Can someone give me some hints or a general approach for solving these types of questions?

for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
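In Python terms the loops would read as below (a sketch; I'm assuming the pseudocode's bounds are inclusive, and I collect the values instead of printing them so the result is easy to inspect):

```python
def run(n):
    # Simulate the pseudocode, assuming inclusive loop bounds.
    out = []
    for i in range(0, n + 1):
        for j in range(n, -1, -1):          # j goes from n down to 0
            for k in range(1, j - i + 1):   # runs j - i times; empty when j <= i
                out.append(k)               # stands in for print(k)
    return out

print(len(run(5)))  # number of print(k) calls for n = 5 → 35
```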
Jakub Kotowski
AmazingVal
  • @C.B. Best case/worst case is in fact not related to Big O, Omega, Theta, ... Big O, Omega, and Theta just say something about lower and upper bounds of functions. That's it. Nothing about worst case/best case. – Jakub Kotowski Feb 04 '14 at 20:13
  • Sorry! I meant best case!! – AmazingVal Feb 04 '14 at 20:20
  • 1
    Omega is not best case. Omega provides a lower bound on the complexity of the algorithm. It just means "it will take at least x steps once the complexity parameter is big enough". Either you are talking about a lower bound, or about the best case. It can't be both. – UmNyobe Feb 04 '14 at 20:47

4 Answers

3

Have a look at this question and its answers, Still sort of confused about Big O notation; it may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.

As to your specific problem, you are asked to find some (asymptotic) lower bound on the time complexity of the algorithm (you should also clarify whether you mean Big Omega or little omega).

Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.

You can see that the first loop runs n+1 times (i goes from 0 to n inclusive), and so does the second.

For the third one, note that when i = 1 and j = n-1, k runs from 1 to j-i = n-1-1 = n-2, which makes me wonder whether your example is correct and whether it should be i-j instead.

In any case, the innermost loop body runs j-i times. Since j decreases while i increases, the difference j-i shrinks as the loops proceed; when both reach about n/2 it hits 0, and after that it is negative, at which point the innermost loop does not execute at all.

Therefore print(k) executes on the order of n^3 times (with inclusive bounds the exact count is n(n+1)(n+2)/6), which is Omega(n^3), as you can easily verify from the definition.
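A quick way to sanity-check this (a Python sketch, assuming inclusive loop bounds) is to count the print(k) executions directly and watch how the count grows when n doubles:

```python
def count_prints(n):
    # The innermost loop runs max(j - i, 0) times, so just sum that up.
    total = 0
    for i in range(0, n + 1):
        for j in range(n, -1, -1):
            total += max(j - i, 0)
    return total

# For a cubic count, doubling n should multiply it by roughly 2^3 = 8.
print(count_prints(200) / count_prints(100))  # ≈ 7.88, approaching 8
```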

Just beware, as @Yves points out in his answer, that this all assumes that print(k) runs in constant time, which is probably what was meant in your exercise.

In the real world it wouldn't be true, because printing the number also takes time, and that time grows with log(n) if print(k) prints in base 2 or base 10 (it would be n for unary).
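A small illustration of that point (Python, decimal printing): the number of characters print(k) has to emit grows like log10(k):

```python
import math

# The decimal digit count of k is floor(log10(k)) + 1, i.e. Theta(log k).
for k in (9, 10, 999, 1000, 10**6):
    digits = len(str(k))
    print(k, digits, math.floor(math.log10(k)) + 1)  # the two counts agree
```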

Otherwise, in this case the best case is the same as the worst case. There is just one input of size n, so you cannot find "some best instance" of size n and a "worst instance" of size n; there is just one instance of size n.

Jakub Kotowski
  • how did you get n/2 ?! – AmazingVal Feb 04 '14 at 20:38
  • that's why my professor said, so I know it's the correct answer, but it seems so arbitrary to me – AmazingVal Feb 04 '14 at 20:38
  • the point is that you substract `j-i` and `j` decreases while `i` increases so when they are `n/2` the result will be 0 and then it will be negative at which point the innermost loop will not execute anymore – Jakub Kotowski Feb 04 '14 at 20:41
  • ahhh that makes perfect sense! – AmazingVal Feb 04 '14 at 20:44
  • I do have one more question though. Why is the best case different from the worst case here? I mean, would the best case be the same as the worst case? – AmazingVal Feb 04 '14 at 21:02
  • I would say that in this case the best case is the same as the worst case. As I said in the other comment, there is just one input of size `n` so you cannot find some best instance of size `n` and worst instance of size `n` there is just one. Just beware, as @Yves points out in his answer, that this is all assuming that `print(k)` is done in constant time which is probably what was meant in your exercise but in real world it wouldn't be true because printing the number also takes time and the time increases with `log(n)` if print(k) prints in base 2 or base 10 (it would be `n` for base 1). – Jakub Kotowski Feb 04 '14 at 21:21
2

I do not see anything in that function that varies between different executions except for n.

Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.

It's O(n^3), in case you're wondering.

  • My professor responded, "consider the values of i, such that 0 < i < n/4 and values of j, such that 3n/4 < j < n" I don't understand why it makes sense to restrict i and j.. – AmazingVal Feb 04 '14 at 20:27
0

If this is a sadistic exercise, the complexity is probably Theta(n^3 log(n)): the triple loop contributes the n^3 factor, and the number of digits to be printed grows with log(n).

-1

As n is the only factor in the behavior of your algorithm, the best case is n being 0. One initialization, one test, and then done...

Edit: The best case describes the algorithm's behavior under optimal conditions. You gave us a code snippet which depends on n. What is the best case? n being zero.

Another example: what is the best case of performing a linear search on an array? It is either that the key matches the first element or that the array is empty.
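A sketch of that linear-search example (in Python; the comparison counter is just for illustration):

```python
def linear_search(arr, key):
    # Return (index, comparisons); index is -1 if the key is absent.
    comparisons = 0
    for idx, item in enumerate(arr):
        comparisons += 1
        if item == key:
            return idx, comparisons
    return -1, comparisons

print(linear_search([7, 3, 5, 1], 7))  # best case: key is first → (0, 1)
print(linear_search([], 7))            # empty array: no comparisons → (-1, 0)
```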

UmNyobe
  • That's not how it works. The question is how much more time the algorithm takes if you double the size of the input. – Jakub Kotowski Feb 04 '14 at 20:32
  • I read the question and it is not about this. I know complexity and I know what best case is. And they are not the same thing. – UmNyobe Feb 04 '14 at 20:33
  • Well, your interpretation of best case seems quite non-standard to me. Have a look for example at http://en.wikipedia.org/wiki/Best,_worst_and_average_case and at the best case of sort algorithms. – Jakub Kotowski Feb 04 '14 at 20:35
  • what is the optimal condition of the algorithm above? ***n being 0***. Try again – UmNyobe Feb 04 '14 at 20:37
  • Ok, I'll try again. `the best-case runtime complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size a.` e.g. http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html and elsewhere. So you take the best instance of size `n` that you can and have a look how quick the algorithm is. In this case, there is only one instance of size `n`. Does it make sense? – Jakub Kotowski Feb 04 '14 at 20:50