Go read about https://en.wikipedia.org/wiki/Significant_figures - floating point has a fixed number of significant digits (precision given by mantissa width), but they can be shifted to any magnitude within the exponent range (the exponent is encoded separately).
In floating point it's a certain number of binary digits (bits), so huge floats can only represent multiples of 4, 8, 16, 32, and so on, getting ever coarser the larger the floating point number gets.
The wiki article for double-precision floating point is very good. It points out, among other things (demonstrated in the short C sketch after this list):
- Integers from −2^53 to 2^53 (−9,007,199,254,740,992 to 9,007,199,254,740,992) can be exactly represented
- Integers between 2^53 and 2^54 = 18,014,398,509,481,984 round to a multiple of 2 (an even number)
- Integers between 2^54 and 2^55 = 36,028,797,018,963,968 round to a multiple of 4
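A minimal C sketch, assuming `double` is IEEE 754 binary64, that prints these rounding effects:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t p53 = 1ULL << 53;              // 9007199254740992, the last range where every integer is exact

    printf("%.1f\n", (double)p53);          // 9007199254740992.0 (exact)
    printf("%.1f\n", (double)(p53 + 1));    // 9007199254740992.0: the odd integer rounds to an even neighbour
    printf("%.1f\n", (double)(p53 + 2));    // 9007199254740994.0 (exact)

    uint64_t p54 = 1ULL << 54;              // between 2^54 and 2^55 the spacing is 4
    printf("%.1f\n", (double)(p54 + 2));    // 18014398509481984.0: halfway case, rounds to even
    return 0;
}
```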
For large doubles like 2^53, there are no mantissa bits left to encode a fractional part: the exponent field shifts them all up into the integer part. Only smaller numbers like 1.125 can have a fractional part. (And you still can't do 1.00000000000000000000000000000000001, because the two non-zero parts are too far apart.)
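A short sketch (again assuming binary64 doubles) showing both effects: no room for a fraction at 2^53, and a tiny addend near 1.0 vanishing because the two parts are too far apart:

```c
#include <stdio.h>

int main(void) {
    double big = 9007199254740992.0;   // 2^53: no mantissa bits left for a fraction
    printf("%.4f\n", big + 0.5);       // 9007199254740992.0000: the 0.5 is rounded away

    double x = 1.0 + 1e-35;            // the non-zero parts are ~35 decimal digits apart
    printf("%d\n", x == 1.0);          // prints 1: the tiny addend was lost entirely
    return 0;
}
```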
If your reasoning worked, shouldn't every floating point number be able to represent an infinite number of digits, if you include the fractional part to the right of the decimal point? Obviously not with a fixed 64-bit value: there are only 2^64 different bit-patterns, so it's a matter of how you spread those values over the range you want to represent.
Floating point chooses a fixed number of digits, and lets the decimal point "float" to different positions based on the exponent. (Actually binary digits, and thus not a decimal point: the correct term would be radix point for base 2 digits. Unless you're using a "decimal floating point" format.)
For example, imagine an infinite string of zeros to the left and right of 4 decimal digits, but you can put a decimal point anywhere within the exponent range limit.
```
0000012340000000.0 # large integer
000001234.00000000 # small integer
000001.23400000000 # small number near 1
0.0000123400000000 # quite small number
```
You can equivalently think about the 1.234 mantissa being shifted left or right by the exponent, relative to the decimal point, to create a variable-sized fixed-point representation padded with actual zero digits to fill the space.
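As a sketch of the binary analogue, C's frexp splits a double into a significand and a power-of-2 exponent, showing the same digits being slid to different magnitudes (the example values are just illustrative):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    // frexp returns a significand in [0.5, 1) and the power-of-2 exponent.
    double values[] = { 12340000000.0, 1234.0, 1.234, 0.00001234 };
    for (int i = 0; i < 4; i++) {
        int exp;
        double sig = frexp(values[i], &exp);
        printf("%-15.10g = %.17g * 2^%d\n", values[i], sig, exp);
    }
    return 0;
}
```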
I'm using decimal for illustration purposes; only a few CPUs have instructions to support a decimal exponent (e.g. some PowerPC). The concept is identical for binary (base 2), with the radix point at some position.
I'm also leaving out some things like the implicit 1 at the top of the binary mantissa implied by a non-zero exponent encoding, and the way the exponent is actually encoded with a bias. See the wiki article for full details.
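For concreteness, here's a sketch that pulls apart the binary64 encoding of 1.5 (sign bit, 11-bit exponent with bias 1023, 52-bit mantissa with an implicit leading 1); it ignores the subnormal/infinity/NaN special encodings the wiki article covers:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 1.5;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                      // reinterpret the bits safely

    unsigned sign        = bits >> 63;
    unsigned exp_encoded = (bits >> 52) & 0x7FF;         // 11-bit biased exponent
    uint64_t mantissa    = bits & ((1ULL << 52) - 1);    // 52 explicit mantissa bits

    // For 1.5: sign 0, encoded exponent 1023 (true exponent 0),
    // mantissa 0x8000000000000, i.e. value (1 + 0.5) * 2^0.
    printf("sign=%u  exponent=%d  mantissa=0x%013llx\n",
           sign, (int)exp_encoded - 1023, (unsigned long long)mantissa);
    return 0;
}
```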
It's also instructive to play around with https://www.h-schmidt.net/FloatConverter/IEEE754.html for single-precision floating point: it shows you the bit-pattern (with checkboxes to flip individual bits), the values represented separately by the mantissa and exponent fields, and the actual value represented overall.
For a more advanced look at some neat floating-point stuff, see Bruce Dawson's series of floating-point articles. Comparing Floating Point Numbers, 2012 Edition has links to all 16 of them, such as There are Only Four Billion Floats–So Test Them All!.
Some of them focus on practicalities of the FP environment in C on x86 and x86-64; another points out that incrementing the integer bit-pattern of a float is how nextafter can be implemented, increasing its magnitude. (The bias in the exponent encoding is what makes FP bit-patterns comparable as sign/magnitude integers, except for the NaN special case.)
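A sketch of that idea for positive finite floats only (a real nextafter also handles sign, zero, infinity, and NaN): incrementing the raw bit-pattern gives the next larger representable value.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

static float next_up(float x) {
    // Only valid for positive, finite, non-max x: the exponent bias makes
    // positive float bit-patterns order the same as unsigned integers.
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    bits += 1;
    memcpy(&x, &bits, sizeof bits);
    return x;
}

int main(void) {
    printf("%.9g\n", next_up(1.0f));           // 1.00000012
    printf("%.9g\n", nextafterf(1.0f, 2.0f));  // same value from the library function
    return 0;
}
```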