You have to take some care when talking about time complexity. Big-O is a measure of asymptotic runtime, which means you need to consider runtimes for arbitrarily large N. Most programming languages do not provide arbitrarily large int types by default, so usually we fudge the issue a bit and pretend that they do.
The first code snippet works if we pretend int can be arbitrarily large. It runs one loop iteration per 1 in the binary representation of N, so in the worst case (when every bit is set) it takes log_2(N) iterations. So it's O(log N).
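For concreteness, here's a sketch in Python of the kind of one-iteration-per-1-bit loop I mean (not necessarily the exact code in your question):

def bit_count_loop(n):
    # Kernighan-style loop: each iteration clears the lowest set bit,
    # so the number of iterations equals the number of 1 bits in n.
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count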
The second code snippet doesn't work (that is, it produces wrong results) if int is larger than 32 bits. If we imagine int getting larger to support the larger values of N we need for an asymptotic analysis, then we'd need more lines in the program to handle the extra bits used to represent N. Because the program has to change to stay correct, an asymptotic analysis isn't really possible. Any particular version of the program runs in O(1) time, but it doesn't produce correct results for large enough N, so it's not a fair comparison against the first code snippet.
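As a sketch of what I have in mind (again, not necessarily the exact code in your question), the hardcoded 32-bit version usually looks something like the following; any bits above bit 31 are simply masked away, which is why it silently breaks for wider ints:

def bit_count_32(n):
    # Hardcoded for 32-bit values: each line adds adjacent groups of
    # 1, 2, 4, 8 and then 16 bits. Bits above bit 31 are lost.
    n = (n & 0x55555555) + ((n >> 1) & 0x55555555)
    n = (n & 0x33333333) + ((n >> 2) & 0x33333333)
    n = (n & 0x0F0F0F0F) + ((n >> 4) & 0x0F0F0F0F)
    n = (n & 0x00FF00FF) + ((n >> 8) & 0x00FF00FF)
    n = (n & 0x0000FFFF) + ((n >> 16) & 0x0000FFFF)
    return n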
You can try to fix the second program so that it adapts to the size of N by using a loop instead of hardcoded shifts and adds (see the dynamic version below). Then I think it runs in O(log(log(N))) time, since each additional shift doubles the number of bits that get aggregated.
One important thing to bear in mind is that we're assuming shifts and bitwise operations are still O(1) even as we make int larger and larger. That assumption is quite common in these analyses, but it's not true on actual computers, where machine instructions work on 4- or 8-byte quantities only.
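If you want to see this informally, one way is to time shifts on Python ints of growing size (this is just a measurement sketch, not part of the algorithm):

import timeit

# Informal check of the O(1)-shift assumption: on CPython the cost of a
# shift grows roughly with the number of machine words the int occupies,
# rather than staying constant.
for bits in (1_000, 100_000, 10_000_000):
    x = (1 << bits) - 1  # an int with `bits` one-bits
    t = timeit.timeit(lambda: x << 1, number=1000)
    print(f"{bits:>10} bits: {t:.4f} s for 1000 shifts")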
Dynamic version of algorithm 2
Your question was about Java, but here's a correct O(log(log(N))) version of algorithm 2 in Python (which does have arbitrarily large ints). You can treat it as pseudo-code if you don't know Python -- but I think it's relatively readable anyhow, and converting it to use Java bignums would make it harder to understand. It computes k, the number of bits needed to represent N, rounded up to a power of 2, and then constructs a list of the shifts and masks necessary to count the bits. This takes O(log(k)) time, and since k = log(N), the overall algorithm takes O(log(log(N))) time (where here "time" means bitwise or arithmetic operations).
def generate_ops(k):
    # Build the (shift, mask) pairs needed to count the bits of a k-bit
    # number (k a power of 2), widest shift first, then reverse the list
    # so the narrowest shift is applied first.
    ops = []
    mask = (1 << k) - 1
    while k:
        ops.append((k, mask))
        k >>= 1
        mask = mask ^ (mask << k)
    return reversed(ops)

def bit_count(N):
    # Round the number of bits needed for N up to a power of 2.
    k = 1
    while (1 << k) < N:
        k *= 2
    # Each step adds adjacent groups of bits, doubling the group size.
    for shift, mask in generate_ops(k):
        N = (N & mask) + ((N >> shift) & mask)
    return N
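If you want to convince yourself it behaves correctly, a quick sanity check against a straightforward count of '1' characters in the binary representation is easy to add:

# Sanity check of bit_count against bin(n).count("1").
for n in (0, 1, 2, 3, 255, 2**40 + 7, 2**1000 - 1):
    assert bit_count(n) == bin(n).count("1")
print("all checks passed")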