This code might be found in a function like the one below:
function FindNextPowerOfTwo(N) {
    let i = 1;          // start from the length-one bit string "1"
    while (i < N) {     // compare the bit strings of i and N
        i = i * 2;      // double i, i.e. append a 0 to its bit string
    }
    return i;
}
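For concreteness, a few sample calls (my own illustration, assuming the function above runs as ordinary JavaScript) behave as follows:

FindNextPowerOfTwo(5);   // returns 8, the smallest power of two greater than or equal to 5
FindNextPowerOfTwo(8);   // returns 8, since 8 is already a power of two
FindNextPowerOfTwo(1);   // returns 1; the loop body never runs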
Here, the input N can be thought of as a k-bit unsigned integer, which we might as well imagine as a string of k bits. The input size is therefore k = floor(log2(N)) + 1 bits.
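As a quick sanity check (my own sketch, using JavaScript's built-in conversions rather than the bit-string model), the number of bits in N's binary representation matches floor(log2(N)) + 1:

// Hypothetical helper, not part of the code being analyzed.
function bitLength(N) {
    return N.toString(2).length;   // number of bits in N's binary string
}
bitLength(13);   // 4, and Math.floor(Math.log2(13)) + 1 === 4
bitLength(16);   // 5, and Math.floor(Math.log2(16)) + 1 === 5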
The assignment i = 1 should be interpreted as creating a new bit string and assigning it the length-one bit string 1. This is a constant-time operation.
The loop condition i < N compares the two bit strings to see which represents the larger number. If implemented intelligently, this comparison takes time proportional to the length of the shorter of the two bit strings, which is never longer than i's. As we will see, the length of i's bit string starts at 1 and grows by 1 each iteration until it is greater than or equal to the length of N's bit string, k. When N is not a power of two, the length of i's bit string reaches k + 1. The time taken by evaluating the condition is therefore proportional to 1 + 2 + ... + (k + 1) = (k + 1)(k + 2)/2 = O(k^2) in the worst case.
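To make the comparison cost concrete, here is a rough sketch (my own illustration, not part of the original code) of a less-than test on bit strings stored most-significant-bit first with no leading zeros:

// Hypothetical comparison routine; on a bit-string machine, even the length
// comparison is a lockstep walk bounded by the shorter string.
function bitStringLessThan(a, b) {
    if (a.length !== b.length) {
        return a.length < b.length;   // fewer bits (no leading zeros) means a smaller number
    }
    // Equal lengths: scan from the most significant bit; the first differing bit decides.
    for (let p = 0; p < a.length; p++) {
        if (a[p] !== b[p]) {
            return a[p] === "0";
        }
    }
    return false;                     // the strings are equal
}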
Inside the loop, we multiply i by two over and over. The complexity of this operation depends on how multiplication is interpreted. Certainly, it is possible to represent our bit strings in such a way that we could multiply by two intelligently, by performing a bit shift and inserting a zero at the end; this could be made a constant-time operation. If we are oblivious to this optimization and perform standard long multiplication, we scan i's bit string once to write out a row of 0s, again to write out i with an extra 0, and then perform ordinary addition with carry by scanning both of these strings. Each step takes time proportional to the length of i's bit string (more precisely, that length plus one), so the whole multiplication is proportional to i's bit-string length. Since the bit-string length of i assumes the values 1, 2, ..., (k + 1), the total time is at most 2 + 3 + ... + (k + 2) = (k + 2)(k + 3)/2 - 1 = O(k^2).
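This oblivious long-multiplication route can also be sketched directly (again my own illustration, only a model of the procedure described above): double a bit string by forming the two partial products for the multiplier "10" and adding them with carry.

// Hypothetical routine: double a bit string (most significant bit first) by
// long multiplication with "10". Each scan is proportional to i's length.
function timesTwoByLongMultiplication(i) {
    const rowOfZeros = "0".repeat(i.length);   // partial product from the 0 bit of "10"
    const shifted = i + "0";                   // partial product from the 1 bit, shifted one place
    let result = "";
    let carry = 0;
    // Ripple-carry addition of the two partial products, scanning from the right.
    for (let p = 0; p < shifted.length; p++) {
        const a = p < rowOfZeros.length ? Number(rowOfZeros[rowOfZeros.length - 1 - p]) : 0;
        const b = Number(shifted[shifted.length - 1 - p]);
        const sum = a + b + carry;
        result = String(sum % 2) + result;
        carry = sum >= 2 ? 1 : 0;
    }
    if (carry === 1) {
        result = "1" + result;                 // kept for a general adder; cannot trigger here
    }
    return result;                             // equals i with a 0 appended
}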
Returning i is a constant-time operation.
Taking everything together, the runtime is bounded from above and from below by functions of the form c * k^2, and so the worst-case complexity is Theta(k^2) = Theta((log N)^2).