Say I have a number, 100000. I can use some simple maths to check its size, i.e. log(100000) -> 5 (base 10 logarithm). There's also another way of doing this, which is quite slow: `std::string num = std::to_string(100000); num.size()`. Is there a way to mathematically determine the length of a number? (Not just 100000, but for things like 2313455, 123876132, etc.)

- Do you mean a mathematical way _other_ than the base 10 logarithm? – Nathan Pierson May 25 '21 at 01:40
- Yes, I wouldn't get a straight `5` doing a base 10 log on `250000`. @NathanPierson – asjhdbashjdbasjhdbhjb May 25 '21 at 01:41
- Maybe you could use the modulo operator on the number until it equals 0 and check how many times it took? That's more programmatic than mathematical, though... – Nick Reed May 25 '21 at 01:42
- The best way can't be better than O(n). – Yves May 25 '21 at 01:42
- Does this answer your question? [C++ - how to find the length of an integer](https://stackoverflow.com/questions/22648978/c-how-to-find-the-length-of-an-integer) – May 25 '21 at 01:45
- Actually, the integer value of a base 10 logarithm on a number might work. – asjhdbashjdbasjhdbhjb May 25 '21 at 01:45
- You can do some trickery, I think, with the floating point standard. Using pointers you can get the exponent of the number, and after that you only need a constant division, if I am correct. – Lala5th May 25 '21 at 01:48
- @Yves Using the floating point standard with pointer manipulation can get the exponent, and that yields O(1), I think. – Lala5th May 25 '21 at 01:49
- @Lala5th How would you do that? The first 10 binary digits after the sign bit? – asjhdbashjdbasjhdbhjb May 25 '21 at 01:57
- @asjhdbashjdbasjhdbhjb I think for single precision it is the first 8 bits. There is some maths you need to do on it (subtraction and division), but after that it should work for values > 1. – Lala5th May 25 '21 at 02:07
- Is it ok if the size is in powers of 2 instead of powers of 10? Getting the size in powers of 2 is absolutely trivial. Getting the size as a power of 10 is trickier. – Mooing Duck May 25 '21 at 02:14
- @asjhdbashjdbasjhdbhjb Added a rough proof of concept below. For double precision it can be adapted (i.e. 23 -> 52 and 127 -> 1023). – Lala5th May 25 '21 at 02:28
3 Answers
Why not use `ceil`? It rounds up to the nearest whole number - you can just wrap that around your log function, and add a check afterwards to catch the fact that an exact power of 10 would return 1 less than expected.

- would screw up with a double like 9.9999997. Not sure if that's on the table or not, though. – user4581301 May 25 '21 at 01:46
Here is a solution to the problem using single precision floating point numbers in O(1):
#include <cstdint>
#include <cstring>
#include <iostream>

int main(){
    float x = 500; // to be converted
    uint32_t f;
    std::memcpy(&f, &x, sizeof(uint32_t)); // Convert float into a manageable int
    uint8_t exp = (f & (0b11111111 << 23)) >> 23; // get the 8 exponent bits
    exp -= 127;  // remove the floating point bias
    exp /= 3.32; // This will round, but for this case it should be fine (log2(10))
    std::cout << std::to_string(exp) << std::endl;
}
For a number in scientific notation `a*10^e` this will return `e` (when `1 <= a < 10`), so the length of the number (if its absolute value is larger than 1) will be `exp + 1`.
For double precision this works too, but you have to adapt it (the bias is 1023 I think, and the bit alignment is different. Check this)
This only works for floating point numbers, though, so it's probably not very useful in this case. The efficiency relative to the logarithm will also be determined by the speed at which the int -> float conversion can occur.
Edit:
I just realised the question was about double. The modified result is:
int16_t getLength(double a){
    uint64_t bits;
    std::memcpy(&bits, &a, sizeof(uint64_t));
    int16_t exp = (bits >> 52) & 0b11111111111; // There is no 11 bit long int so this has to do
    exp -= 1023; // double precision bias
    exp /= 3.32; // rounds, as before
    return exp + 1;
}
There are some changes so that it behaves better (and also less shifting).
You can also use `frexp()` to get the exponent without bias.

- you can fix your answer, it's better than the bit-hacks. Or at least the second answer, which is an edit anyway. – user1095108 May 25 '21 at 15:26
- @user1095108 `frexp()` probably does the same, or something similar. I don't want to replace my answer with just "use `frexp()`" because a) it doesn't change the fact that some floating point pointer hacking happens and b) it just abstracts away the reason why it works, and how. – Lala5th May 25 '21 at 15:45
- it would provide an alternative to what you provide in answer number 1. – user1095108 May 25 '21 at 17:18
If the number is whole, keep dividing by 10 until you reach 0. You'd have to divide 100000 six times, for example. For the fractional part, you need to keep multiplying by 10 until `trunc(f) == f`.

- that seems quite slow, comparatively, to the logarithm solution – asjhdbashjdbasjhdbhjb May 25 '21 at 02:11
- is log() fast? I didn't know it was that fast. Anyway, you can write your own, in which case you will have muls instead of divs. We would have to make a proper comparison. For bignums floats are inappropriate anyway. – user1095108 May 25 '21 at 02:18
- @asjhdbashjdbasjhdbhjb For low N, the simple and slow algorithms tend to outperform smarter algorithms, because the added smarts you have to put into the algorithm to get from, say, O(N) to O(log N) often have costs that offset the gains of the smarter algorithm. The most you get out of a typical `double` implementation is about 300 digits, which may or may not be peanuts, depending on the speed of the `log` implementation. 6 digits is laughable and will almost certainly compute faster than `log`. – user4581301 May 25 '21 at 05:05