
I'm trying to calculate the time complexity of a simple algorithm in big O notation, but one part of it is seriously boggling my mind. Here is a simplified version of the algorithm:

int a = n;
while (a > 0)
{
    // for loop with time complexity n^3
    a = a / 8;
}

In this instance it's integer division, so once a's value drops below 8 the next division makes it 0 and the while loop terminates. I'm not sure how to express this in terms of n. I'd also like to know how to tackle future calculations like this, where the number of loop iterations isn't so easy to pin down.
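To make that concrete, here is a minimal standalone trace (n = 1000 is just an arbitrary example value):

#include <stdio.h>

int main(void)
{
    int n = 1000;               /* arbitrary example input */
    int a = n;
    int iterations = 0;

    while (a > 0)
    {
        printf("a = %d\n", a);  /* prints 1000, 125, 15, 1 */
        a = a / 8;              /* integer division */
        iterations++;
    }
    printf("iterations = %d\n", iterations);  /* 4 for n = 1000 */
    return 0;
}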

Maria-Andersado
  • You need to show the whole code for a calculation like this to be done. `//for loop with time complexity n^3`: is it really `n^3`, or is it really `a^3`? There is a big difference. In general, what you need to do is calculate how much time is spent per iteration of the `while` and how many iterations happen. – Chris Beck Sep 09 '15 at 22:36
  • Do you see that the loop executes O(log(n)) times? – Zach Langley Sep 09 '15 at 22:53

2 Answers


I find it easier to do it the other way around in cases like this. What is the opposite of what you're doing (even approximately)? Something like:

for (a = 1; a <= n; a = a * 8)
{
    ...
}

Now we've changed the while into a for, which has an explicit number of steps, and turned the repeated division into repeated multiplication, which can be easier to reason about.

Counting the values a takes, we have:

1, 8, 8^2, ..., 8^k <= n

8^k <= n            (apply the logarithm to both sides)

log (8^k) <= log n

k log 8 <= log n

=> k = O(log n)

So your while loop executes O(log n) times. If you have something that is O(n^3) inside, then your entire sequence will be O(n^3 log n).
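If you want to sanity-check this empirically, here is a small throwaway sketch (not part of your algorithm, just a counter) that counts the while-loop iterations for a few values of n and compares them with log_8(n):

#include <stdio.h>
#include <math.h>

/* Count how many times the body of the original while loop runs. */
static int count_iterations(int n)
{
    int a = n;
    int count = 0;
    while (a > 0)
    {
        count++;
        a = a / 8;
    }
    return count;
}

int main(void)
{
    int values[] = { 10, 100, 1000, 1000000 };
    for (int i = 0; i < 4; i++)
    {
        int n = values[i];
        printf("n = %7d  iterations = %d  log_8(n) = %.2f\n",
               n, count_iterations(n), log(n) / log(8));
    }
    return 0;
}

For these inputs the count comes out as floor(log_8(n)) + 1, i.e. O(log n), matching the derivation above.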

IVlad
  • So, you inverted the conditions for the loop so it would be changed into a format you could work with more easily, and once you had the time complexity, you applied an inverse operation in order to get the time complexity of the original loop? – Maria-Andersado Sep 09 '15 at 23:05
  • @Maria-Andersado There's no need for any inverse operation, since the loop I changed yours to is equivalent in terms of the number of operations; I just did them in reverse to reason more easily about the number of steps. So its time complexity corresponds to the original loop's time complexity. – IVlad Sep 09 '15 at 23:08

The key to this is that it's integer division, not Zeno's paradox. Repeated division takes log(n) steps to reduce a to something that will become zero on the next step.

Another way to look at integer division by a power of two is as a bit shift. Shifting a to the right by 3 bits will produce a zero after a number of steps that depends on the position of the highest set bit in a, i.e. roughly (sizeof(a)*CHAR_BIT - leading_zero_count(a)) / 3 steps. Bit position is the same thing as log base 2.
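Here is a small sketch of that idea; it uses the GCC/Clang builtin __builtin_clz (so it is compiler-specific) and assumes a starts out positive:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int a = 1000;   /* arbitrary positive example */

    /* Position of the highest set bit, i.e. floor(log2(a)).
       __builtin_clz is undefined for 0, hence the a > 0 assumption. */
    int highest_bit = (int)(sizeof(a) * CHAR_BIT) - 1 - __builtin_clz(a);

    /* Dividing by 8 is a right shift by 3, so the loop runs
       highest_bit / 3 + 1 times before a reaches zero. */
    int predicted = highest_bit / 3 + 1;

    int actual = 0;
    while (a > 0)
    {
        a >>= 3;             /* same as a = a / 8 for non-negative values */
        actual++;
    }

    printf("predicted = %d, actual = %d\n", predicted, actual);
    return 0;
}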

Peter Cordes