while N-bit integer a > 1, a = a / 2

I was thinking it is log(n) because each time you go through the while loop you are dividing a by two, but my friend thinks it's 2 log(n).
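
For concreteness, here is a minimal runnable version of the loop in C that just counts how many times the body executes (the starting value 1000 is an arbitrary example of mine, not from the question):

#include <stdio.h>

int main(void) {
    unsigned int a = 1000;   /* arbitrary example value */
    int iterations = 0;
    while (a > 1) {          /* the loop from the question */
        a = a / 2;
        iterations++;
    }
    printf("loop ran %d times\n", iterations);   /* prints 9, i.e. floor(log2(1000)) */
    return 0;
}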

gunr2171
John Lee

2 Answers

Clearly your algorithm is in big-Theta(log(a)), where a is your number.

But as far as I understand your problem, you want to know the asymptotic runtime depending on the number of bits of your number.

That's really difficult to say and depends on your number:

Let's say you have an n-bit integer whose most significant bit is 1. You have to divide it about n times (n - 1 divisions, to be exact) before the value drops to 1 and the loop stops.

Now let's look at an integer where only the least significant bit is 1 (so it equals the number 1 in the decimal system). There the loop condition a > 1 fails right away, so no division is needed at all.

So I would say it takes about n/2 divisions on average, which makes it big-Theta(n), where n is the number of bits of your number. The worst case is also big-Theta(n) and the best case is big-Theta(1).

NOTE: Dividing a number by two in the binary system has a similar effect to dividing a number by ten in the decimal system.
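
As a rough sketch of the point above (the example values are mine, not from the answer), the number of divisions equals the position of the highest set bit, not the full bit width n:

#include <stdio.h>

/* counts how many times the loop from the question divides a */
static int divisions(unsigned int a) {
    int count = 0;
    while (a > 1) {
        a = a / 2;    /* same as shifting right by one bit */
        count++;
    }
    return count;
}

int main(void) {
    printf("%d\n", divisions(1u << 15));  /* 16-bit value with the MSB set: 15 divisions */
    printf("%d\n", divisions(1u));        /* only the LSB set: 0 divisions */
    return 0;
}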

Andreas

Dividing an integer by two can be efficiently implemented by taking the number in binary notation and shifting the bits. In the worst case, all the bits are set and you have to shift (n-1) bits for the first division, (n-2) bits for the second, and so on, until you shift 1 bit on the last iteration and find that the number has become equal to 1, at which point you stop. This means your algorithm must shift 1 + 2 + ... + (n-1) = n(n-1)/2 bits in total, making it O(n^2) in the number of bits of input.
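
A quick sketch of that cost model (the bit width 16 and the variable names are my own choices): counting the significant bits moved by each division on an all-ones input reproduces the n(n-1)/2 total.

#include <stdio.h>

int main(void) {
    int n = 16;                        /* arbitrary example bit width */
    unsigned int a = (1u << n) - 1;    /* worst case: all n bits set */
    long bit_moves = 0;
    while (a > 1) {
        /* the result of this division has one significant bit fewer;
           count each of its bits as one position moved */
        for (unsigned int t = a / 2; t > 0; t /= 2)
            bit_moves++;
        a = a / 2;
    }
    printf("%ld == %d\n", bit_moves, n * (n - 1) / 2);  /* 120 == 120 */
    return 0;
}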

A more efficient algorithm that leaves a with the same final value is a = (a == 0 ? 0 : 1). This produces the same answer in linear time (the equality check is linear in the number of bits), and it works because your code only leaves a = 0 if a is zero to begin with; in all other cases, the highest-order set bit ends up in the ones place.
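
To make that claim concrete, here is a small self-check (the helper names are mine, not from the answer) comparing the loop with the one-line shortcut over a range of values:

#include <assert.h>
#include <stdio.h>

static unsigned int by_loop(unsigned int a) {
    while (a > 1)
        a = a / 2;
    return a;
}

static unsigned int by_shortcut(unsigned int a) {
    return a == 0 ? 0 : 1;   /* 0 stays 0, everything else collapses to 1 */
}

int main(void) {
    for (unsigned int a = 0; a < 100000u; a++)
        assert(by_loop(a) == by_shortcut(a));
    printf("loop and shortcut agree\n");
    return 0;
}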

Patrick87