A floating point number (which includes float and double in C) is represented by two parts, both of which have a fixed number of bits in which to hold their value:
- a binary fraction called the mantissa, with no bits to the left of the binary point and no zeros immediately to the right of it. (This is analogous to a decimal representation of a number: digits to the left of the decimal point correspond to bits to the left of the binary point, and the fractional digits to the right of the decimal point correspond to the fractional bits to the right of the binary point.)
- an exponent that tells what power of 2 to multiply that mantissa by. (Compare this to scientific notation: in 0.1e5, the 5 is the exponent that tells what power of 10 to multiply the mantissa by.)
In decimal, we can't represent the fraction 1/3 with a fixed number of fractional digits. For example, 0.333333 isn't exactly equal to 1/3 as the 3 needs to repeat infinitely.
In binary, we can't represent the fraction 1/10 with a fixed number of fractional bits. In this case the binary number 0.00011001100110011 isn't exactly equal to 1/10 as the 0011 needs to repeat indefinitely. So, when 1/10 is converted to floating point, this part is cut off to fit the available bits.
In binary, any fraction whose denominator (in lowest terms) has a prime factor other than 2 repeats infinitely; in particular, any fraction with a denominator divisible by 10 does. That means that a lot of float values are inexact.
When added together, those inexact values produce inexact sums. If you add a lot of them, the inexactness may cancel or reinforce, depending on what was in the bits that got chopped off when the infinitely repeating binary fraction was cut down to a fixed number of bits.
You also get inexactness with large numbers, with fractions that need many digits, and when adding numbers of very different magnitudes. For example, in 1 billion plus .0000009, the sum can't be represented in the available number of bits, so the small fraction gets rounded away.
You can see that it gets complicated. In any particular case, you can work out the floating point representation, evaluate the error from the chopped-off bits, and track the rounding that happens when multiplying or dividing. If you go to that trouble, you can see exactly why the result is "wrong".
Simplified Example - imprecise representation
Here's an example that ignores the exponent and leaves the mantissa un-normalized, which means left-side zeros aren't removed. Chopped after 7 bits, 0.0001100 = 1/10 and 0.0011001 = 2/10 (1/5). Note that in the real case the issue happens many more digits to the right:
Adding six copies of 1/10:

    0.0001100   = 1/10
    0.0001100
    0.0001100
    0.0001100
    0.0001100
    0.0001100
    ---------
    0.1001000   <- sum

Adding three copies of 2/10:

    0.0011001   = 2/10 (1/5)
    0.0011001
    0.0011001
    ---------
    0.1001011   <- sum

Both additions should equal 6/10, which in binary is 0.1001100110011... (0.1001100 chopped to 7 bits). Neither sum matches it, and the two sums don't even match each other, because different bits got chopped off along the way.
We could have the same problem with fractions like 0.12345678901234567890 that wouldn't fit in the 7 bits of my example.
What to Do
First, keep in mind that floating point numbers may not be exact. Adding or subtracting and, even more, multiplying or dividing should be expected to create inexact results.
Second, when comparing two float (or double) values, it is best to compare the difference to some "epsilon". So if, heaven forbid, you were storing US Dollar calculations in float variables and didn't care about anything less than half a cent, the test for a meaningful difference would look like this:
if (fabsf(f1 - f2) >= 0.005f) ...
This test is true when the numbers differ by at least half a cent. When it is false, the numbers are close to each other and, for your purposes, close enough to be treated as equal. (@EricPostpischil points out that there is no general definition of "close enough." It has to do with what your calculations hope to accomplish.)
Doing the comparison to some small value takes care of all the loose bits that may be sitting in the low fractional digits after some floating point arithmetic takes place.
Note that if you compare to a constant, it looks similar:
if (fabsf(f1 - 1.0f) >= 0.000001f) ...
or you could do it with two comparisons that check the same range of differences:
if (f1 < 0.999999f || f1 > 1.000001f) ...
I should point out, again, that each problem has its own number of interesting fractional decimal digits.
For example, if Google tells you how far apart two positions on the earth are in kilometers, you may care to the nearest meter, so you say any two positions within 0.001 (a thousandth of a kilometer) are functionally identical and compare the difference to 0.0005. Or you may only care to the nearest block, say 0.03 (30 meters), so compare the difference to 0.015.
The same thing applies when your measuring tools are only so accurate. If you measure with a yardstick, don't expect the result to be accurate to 1/100th of an inch.