
I came across a question asking to describe the computational complexity in Big O of the following code:

i = 1;
while(i < N) {
    i = i * 2;
}

I found this Stack Overflow question asking about it, with the most upvoted answer saying it is O(log2(N)).

At first glance that answer looks correct; however, I remember learning about pseudo-polynomial runtimes, and how computational complexity measures difficulty with respect to the length of the input rather than its value.

So for integer inputs, the complexity should be in terms of the number of bits in the input.

Therefore, shouldn't this function be O(N), since every iteration of the loop increases the number of bits in i by 1, until it has about as many bits as N?
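
As a quick sanity check of that intuition, here is a minimal sketch (my own, in Python, which has arbitrary-precision integers; the helper name iterations is made up) that counts the loop iterations and compares the count with the number of bits of N:

def iterations(N):
    # Count how many times the loop body runs for a given N.
    i, count = 1, 0
    while i < N:
        i = i * 2
        count += 1
    return count

for N in [10, 1000, 10**6, 2**64]:
    # count is roughly log2(N), i.e. roughly the number of bits of N
    print(N.bit_length(), iterations(N))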

SW Williams

3 Answers


This code might be found in a function like the one below:

function FindNextPowerOfTwo(N) {
    i = 1;
    while(i < N) {
        i = i * 2;
    }
    return i;
}

Here, the input can be thought of as a k-bit unsigned integer, which we might as well imagine as a string of k bits. The input size is therefore k = floor(log2(N)) + 1 bits.
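
As a small illustration (my own sketch, assuming Python's arbitrary-precision integers), this k is exactly what Python reports as the bit length of N:

import math

# k = floor(log2(N)) + 1 is the number of bits needed to write N (for N >= 1).
for N in [1, 2, 7, 8, 1000, 12345]:
    k = math.floor(math.log2(N)) + 1
    print(N, k, N.bit_length())   # k equals N.bit_length() for each of these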

The assignment i = 1 should be interpreted as creating a new bit string and assigning it the length-one bit string 1. This is a constant time operation.

The loop condition i < N compares the two bit strings to see which represents the larger number. If implemented intelligently, this will take time proportional to the length of the shorter of the two bit strings, which will always be i's. As we will see, the length of i's bit string begins at 1 and increases by 1 until it is greater than or equal to the length of N's bit string, k. When N is not a power of two, the length of i's bit string will reach k + 1. Thus, the time taken by evaluating the condition is proportional to 1 + 2 + ... + (k + 1) = (k + 1)(k + 2)/2 = O(k^2) in the worst case.
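
To make that accounting concrete, here is a sketch of the cost model (my own code, not from the answer) that charges each evaluation of i < N the length of i's bit string and compares the total against (k + 1)(k + 2)/2:

def comparison_cost(N):
    # Charge |i| bits for every test of the condition i < N, then compare the
    # running total with the closed form (k + 1)(k + 2) / 2 used above.
    k = N.bit_length()
    i, cost = 1, 0
    while True:
        cost += i.bit_length()      # one evaluation of the condition
        if not (i < N):
            break
        i = i * 2
    return cost, (k + 1) * (k + 2) // 2

print(comparison_cost(1000))   # N not a power of two: total hits the bound
print(comparison_cost(1024))   # N a power of two: the final term is smaller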

Inside the loop, we multiply i by two over and over. The complexity of this operation depends on how multiplication is interpreted. Certainly, it is possible to represent our bit strings in such a way that we can multiply by two intelligently by performing a bit shift and appending a zero; this can be made a constant-time operation. If we are oblivious to this optimization and perform standard long multiplication, we scan i's bit string once to write out a row of 0s and again to write out i with an extra 0, and then we perform regular addition with carry by scanning both of these strings. The time taken by each step here is proportional to the length of i's bit string (plus a constant), so the whole thing is proportional to i's bit-string length. Since the bit-string length of i assumes the values 1, 2, ..., (k + 1), the total time is at most 2 + 3 + ... + (k + 2) = (k + 2)(k + 3)/2 = O(k^2).
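
The two readings of i = i * 2 can be sketched explicitly (again my own illustration, operating on most-significant-bit-first strings): the shift view appends a single 0, while the long-multiplication view does work proportional to the length of i's bit string:

def double_by_shift(bits):
    # Multiply by two by appending a 0; a constant amount of work per call.
    return bits + "0"

def double_by_long_mult(bits):
    # Schoolbook multiplication by "10" in binary: one row of zeros, one
    # shifted copy of bits, then ripple-carry addition.  The work counter
    # grows in proportion to len(bits).
    work = 0
    row0 = "0" * (len(bits) + 1)      # partial product for the low 0 bit
    work += len(bits)
    row1 = bits + "0"                 # partial product for the high 1 bit
    work += len(bits)
    out, carry = [], 0
    for a, b in zip(reversed(row0), reversed(row1)):
        s = int(a) + int(b) + carry
        out.append(str(s % 2))
        carry = s // 2
        work += 1
    if carry:
        out.append("1")
    return "".join(reversed(out)), work

print(double_by_shift("101"))        # '1010'
print(double_by_long_mult("101"))    # ('1010', 10)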

Returning i is a constant time operation.

Taking everything together, the runtime is bounded from above and from below by functions of the form c * k^2, and so the worst-case complexity is Theta(k^2) = Theta(log(N)^2).

Patrick87

It depends on an important question: is multiplication a constant-time operation?

In the real world it is usually considered constant, because you have fixed 32- or 64-bit numbers and multiplying them always takes the same (= constant) time. On the other hand, you then have the limitation that N must fit in 32/64 bits (or whatever width you use).

In theory, where you do not consider multiplication a constant operation, or for special algorithms where N can grow too large to ignore the cost of multiplication, you are right: you have to start thinking about the complexity of multiplying.

The complexity of multiplying by a constant (in this case 2): you have to go through each bit each time, and you have log_2(N) bits.

And you have to do this log_2(N) times before you reach N.

Which gives a complexity of log_2(N) * log_2(N) = O(log_2^2(N)).
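
A rough count under that model (my own sketch, in Python): charge each multiplication the current bit length of i, and the total grows like log_2(N)^2:

def multiplication_bit_ops(N):
    # Charge len(i's bits) for each multiplication; the total behaves like
    # k^2 / 2 where k = log2(N), i.e. Theta(log^2 N).
    i, ops = 1, 0
    while i < N:
        ops += i.bit_length()
        i = i * 2
    return ops

for N in [2**10, 2**20, 2**40]:
    k = N.bit_length() - 1
    print(k, multiplication_bit_ops(N), k * k)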


PS: Akash has a good point that multiplying by 2 can be written as a constant operation, because the only thing you need to do in binary is "append a zero" (similar to multiplying by 10 in "human readable" format, where you just append a zero: 4333 * 10 = 43330).
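
A tiny illustration of that point (mine, in Python): doubling and a one-bit left shift produce the same bit pattern:

i = 0b1011            # 11 in binary
print(bin(i * 2))     # 0b10110 -- the same digits with a 0 appended
print(bin(i << 1))    # 0b10110 -- identical result via a shift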

However, if multiplication is not that simple (you have to go through all the bits), the previous answer is correct.

libik

In the given example, you are not increasing the value of i by 1 but doubling it every time, so it approaches N much faster. By multiplying it by two you are reducing the size of the search space (between i and N) by half, i.e., reducing the remaining space by a factor of 2 at each iteration. Thus the complexity of your program is log_2(N).

If instead you were doing:

i = i * 3;

The complexity of your program would be log_3 (N).
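
One way to check both claims (a sketch of mine, not part of the original answer): after t iterations of i = i * c starting from 1, i equals c^t, so the loop exits at the first t with c^t >= N, i.e. after about log_c(N) iterations:

import math

def iterations(N, factor):
    # Run the loop with i = i * factor and count how many times it executes.
    i, t = 1, 0
    while i < N:
        i = i * factor
        t += 1
    return t

for N in [100, 10**6, 10**12]:
    print(N, iterations(N, 2), math.ceil(math.log2(N)))    # ~log_2(N)
    print(N, iterations(N, 3), math.ceil(math.log(N, 3)))  # ~log_3(N)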

Akash Chandwani
  • You did not understand the question. Imagine that N is incredibly large; multiplying an incredibly large number is not a constant operation. What if the number has, say, quadrillions of bits? Do you know that it could then take a current computer thousands of years to multiply it? – libik Jan 03 '18 at 12:44
  • Multiplying by 2 is a constant operation, since you can achieve it with a left shift of the bits. N is large, I agree, but i is not (at the start, at least). Think of i and N on a number line; i approaches N quickly because it is multiplied by 2 at every iteration. – Akash Chandwani Jan 04 '18 at 06:16
  • That is actually quite a good answer :). However, your answer is still partially incorrect, as multiplying by 3 cannot be done so easily and you have to go through all the bits, right? – libik Jan 04 '18 at 10:31
  • Yes, you are right, multiplying by 3 is not as simple as multiplying by 2 at the machine level, but it is still a constant-time operation, since O(1) + O(1) is O(1) asymptotically. – Akash Chandwani Jan 04 '18 at 11:28
  • Can you explain how you can multiply by 3 in constant time for a number that has a quadrillion (or any other number) of bits? – libik Jan 04 '18 at 12:03
  • If you have a quadrillion bits, that means your N ~= 2^quadrillion, therefore log_2(2^quadrillion) = quadrillion. Your complexity is in the range of a quadrillion. (I think you are thinking in the wrong direction: we are not interested in the complexity of multiplying by 3 in constant time, but in the problem you posted.) – Akash Chandwani Jan 05 '18 at 13:08
  • This is correct: ```log_2(2^quadrillion) = quadrillion```. So we can agree that I have to do a quadrillion operations to multiply it by 3, right? (I have to go through each bit.) However, that is only one multiplication. How many times do we have to multiply? This many times: ```log_3(2^quadrillion)```. Which leads to (with ```N = 2^quadrillion``` in our case) ```O(N) = log_3(N) * log_2(N)``` – libik Jan 05 '18 at 13:15
  • You mean T(N) = O(log_3(N) * log_2(N)), which is again in the O(log(N)) range – Akash Chandwani Jan 05 '18 at 13:20
  • It is not; ```log(N)``` is not asymptotically equal to ```log^2(N)``` – libik Jan 05 '18 at 13:25
  • Sorry, your equation is not correct. This one is correct: ```log(N) + log(N) = log(N*N)```. Well, let's try a real example: ```N = 2^1000```, then ```log(N) = 1000```, which leads to ```log(N)*log(N) = 1000*1000 = 1000000```, however ```log(N*N) = log(2^2000) = 2000``` – libik Jan 05 '18 at 13:35
  • I am not sure of the actual simplification of log_3(N) * log_2(N), but I don't think it's equivalent to log(log(N)) – Akash Chandwani Jan 05 '18 at 13:36
  • I am not sure if you can simplify it, but often the base of the logarithm is not even mentioned, which gets you to something with no further simplification: ```log(N)*log(N) = log^2(N)```. Be careful, as ```O(log(log(N))) <= O(log(N)) <= O(log^2(N))``` – libik Jan 05 '18 at 13:42
  • So, one last statement: the base of the logarithm is not important because of this: ```log_2(2^x) = x``` and ```log_4(2^x) = x/2```, which leads to ```log_2(2^x) = 2*log_4(2^x)```, and we know that asymptotically we can ignore constants, therefore ```O(log_2(2^x)) = O(log_4(2^x))```. – libik Jan 05 '18 at 13:49