Your computer uses binary floating point internally. Type `float` has 24 bits of precision, which translates to approximately 7 decimal digits of precision. Your number, 2118850.132, has 10 decimal digits of precision. So right away we can see that it probably won't be possible to represent this number exactly as a `float`.
Furthermore, due to the properties of binary numbers, no decimal fraction that ends in 1, 2, 3, 4, 6, 7, 8, or 9 (that is, numbers like 0.1 or 0.2 or 0.132) can be exactly represented in binary. So those numbers are always going to experience some conversion or roundoff error.
When you enter the number 2118850.132 as a `float`, it is converted internally into the binary fraction `1000000101010011000010.01`. That's equivalent to the decimal fraction 2118850.25. So that's why the .132 seems to get converted to 0.25.
As I mentioned, `float` has only 24 bits of precision. You'll notice that `1000000101010011000010.01` is exactly 24 bits long. So we can't, for example, get closer to your original number by using something like `1000000101010011000010.001`, which would be equivalent to 2118850.125 and therefore closer to your 2118850.132, because that would take 25 bits. The next lower 24-bit fraction is `1000000101010011000010.00`, which is equivalent to 2118850.00, and the next higher one is `1000000101010011000010.10`, which is equivalent to 2118850.50, and both of those are farther away from your 2118850.132. So 2118850.25 is as close as you can get with a `float`.
If you used type `double` you could get closer. Type `double` has 53 bits of precision, which translates to approximately 16 decimal digits. But you still have the problem that .132 ends in 2 and so can never be exactly represented in binary. As type `double`, your number would be represented internally as the binary number `1000000101010011000010.0010000111001010110000001000010` (note 53 bits), which is equivalent to 2118850.132000000216066837310791015625, which is much closer to your 2118850.132, but is still not exact. (Also notice that 2118850.132000000216066837310791015625 begins to diverge from your 2118850.1320000000 after 16 digits.)
So how do you avoid this? At one level, you can't. It's a fundamental limitation of finite-precision floating-point numbers that they cannot represent all real numbers with perfect accuracy. Also, the fact that computers typically use binary floating-point internally means that they can almost never represent "exact-looking" decimal fractions like .132 exactly.
There are two things you can do:
- If you need more than about 7 digits worth of precision, definitely use type `double`, don't try to use type `float`.
- If you believe your data is accurate to three places past the decimal, print it out using `%.3f`. If you take 2118850.132 as a `double`, and print it using `%.3f`, you'll get 2118850.132, like you want. (But if you printed it with `%.12f`, you'd get the misleading 2118850.132000000216.)