Given a big decimal integer, how do I compute the bit-length, i.e. the number of digits of its binary representation?
Example: bitlength("590295810358705712624") == 70
The arithmetic expression is: bitlength = ⌊log₂(n)⌋ + 1
For small integers, this expression can be translated to standard library calls. But what about big integers with an arbitrary number of digits?
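For instance, in Python (my own illustration, not from the question), the formula maps directly onto `math.log2` for integers small enough that the float result is trustworthy:

```python
import math

def bitlength_small(n: int) -> int:
    # floor(log2(n)) + 1 -- only reliable while n is small enough
    # that double-precision log2 cannot round across an integer boundary
    return math.floor(math.log2(n)) + 1
```

For large n this breaks down: near a power of two, the float `log2` can round to the wrong side of an integer, which is exactly the problem the question is about.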
We could compute a close lower bound on log₂(n) from one or two leading digits:
log₂(8192) ≥ log₂(8100) = log₂(81) + log₂(10) * 2 = 12.98...
Plugging this into the arithmetic expression above, we get a very good lower bound for the bit-length. But in some cases, we have to inspect more digits, potentially up to the least significant one, in order to get an exact result:
bitlength("1125899906842623") == 50
bitlength("1125899906842624") == 51
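One way to make the estimate exact (a Python sketch of the estimate-then-verify idea; the function name and the safety margin are my own) is to deliberately under-estimate the bit-length from the leading digits, then correct upward with exact big-integer comparisons:

```python
import math

def bitlength(dec: str) -> int:
    n = int(dec)                    # exact big integer
    # Lower-bound estimate of log2(n) from the leading digits:
    head = dec[:15]                 # 15 decimal digits fit a double exactly
    est = math.log2(int(head)) + (len(dec) - len(head)) * math.log2(10)
    # Subtract a small margin so float rounding can never overshoot the
    # true log2(n), then correct upward with exact shifts (the estimate
    # is so tight that only a step or two is ever needed).
    k = max(int(est - 1e-9), 0)
    while n >> k:                   # n >= 2**k, i.e. more than k bits
        k += 1
    return k
```

In Python specifically this is redundant, since `int(dec).bit_length()` is already exact; the sketch is only meant to show that the leading-digit estimate needs at most a constant number of exact corrections, even in the borderline cases above.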
Any suggestions on how to compute the bit-length exactly and efficiently in all cases?