
I thought I understood Big-O fully, and I think I still have a decent grasp of it, but I wanted to make sure I am understanding it correctly.

If I have code similar to this:

while (number > 0)
{
    sum += number;   // accumulate the current value
    number /= 2;     // halve number on every pass
}

I believe that my Big-O runtime would be O(N), since the loop would run, by my calculations, O(N/2) times, and each of the statements in the loop runs in O(1). So we end up with O(N/2) + O(1) + O(1) = O(N/2 + 1 + 1), and dropping the constants leaves us with O(N)... but since number is halved on every pass instead of decremented, I suspect this is actually O(log N) runtime. I just wanted some clarification and verification to make sure I understood this concept. I have read up quite a bit on this topic and watched videos; I just couldn't find a similar enough example anywhere, and the 'number /= 2' is what is throwing me off.
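For concreteness, here is the loop with a trace added so I can watch number shrink (a quick Java sketch, assuming number is an int):

public class Trace {
    public static void main(String[] args) {
        int number = 16;  // example input; any positive int works
        int sum = 0;
        int pass = 0;
        while (number > 0) {
            sum += number;
            System.out.println("pass " + (++pass) + ": number = " + number);
            number /= 2;  // integer division: 16, 8, 4, 2, 1, then 1/2 == 0
        }
        // For number = 16 this prints 5 passes, not N/2 = 8 of them.
        System.out.println("sum = " + sum);
    }
}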

For the record, it was a homework question that got me thinking about this.

1 Answer


Mathematically speaking, your algorithm would never even terminate, assuming that number can have a fractional component. This is because no matter how many times you halve a positive real number, you are always left with some number greater than zero. In practice, a floating-point number on a computer would eventually underflow to zero, so the loop does terminate, but only because of finite precision.
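You can see this underflow behavior directly (a minimal Java sketch; the printed count of 1075 assumes standard IEEE 754 doubles):

public class Underflow {
    public static void main(String[] args) {
        double number = 1.0;
        int halvings = 0;
        while (number > 0) {
            number /= 2;
            halvings++;
        }
        // With IEEE 754 doubles this prints 1075: the value walks down
        // through the subnormal range and finally rounds to 0.0, so the
        // loop terminates, but only because of finite precision.
        System.out.println(halvings);
    }
}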

I would like to answer your question with a slight change:

while (number > 1) {
    sum += number;   // O(1) work per iteration
    number /= 2;     // halves the remaining range each time
}

The running time of this algorithm is actually O(log_2(N)), i.e. log base 2 of N. To see why, consider an input number of 16. The following iterations take place, with the corresponding value of number at the start of each one:

iteration # | number at start of iteration
1           | 16 = 2^4
2           |  8 = 2^3
3           |  4 = 2^2
4           |  2 = 2^1

(The loop exits once number reaches 1 = 2^0.)

You can convince yourself that the exponent on the base 2 determines the number of iterations: 16 = 2^4 takes 4 iterations, and in general the loop runs about log_2(N) times. We recover that exponent by taking the log base 2, hence the running time is O(log_2(N)). And since logarithms of different bases differ only by a constant factor, this is the same complexity class as O(log N).
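If you want to verify this empirically, here is a small sketch that counts the loop's iterations (HalvingLoop and iterations are just illustrative names):

public class HalvingLoop {
    // Counts how many times the loop body runs for a given input.
    static int iterations(int number) {
        int count = 0;
        while (number > 1) {
            number /= 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Prints: 16 -> 4, 1024 -> 10, 1000000 -> 19,
        // matching floor(log2(N)) in each case.
        for (int n : new int[] {16, 1024, 1000000}) {
            System.out.println(n + " -> " + iterations(n));
        }
    }
}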

Tim Biegeleisen