Is there a simple way to tell whether a particular number gets rounded up in its floating-point representation? The reason I ask is related to a question I asked here, and a similar question was asked here, amongst others.
To recap, I was trying to ask why, for example, the expression 0.5 % 0.1 doesn't result in approximately zero but instead gives (approximately) 0.1. Many respondents go on about how most numbers can't be exactly represented and so on, but fail to actually explain why, for certain values, the result of the % operator is so far from zero when there is no remainder. It took me a long time to work out what was happening and I think it's worth sharing. It also explains why I'm asking my question.
It seems that the % operator doesn't result in zero when it should if the divisor is rounded up in its floating-point format but the dividend isn't. The division algorithm repeatedly subtracts the divisor from the dividend until subtracting again would produce a negative value. The quotient is the number of subtractions and the remainder is what's left of the dividend. It may not be immediately clear why this produces such a large error (it certainly wasn't to me), so I'll give an example.
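To make that concrete, here is a minimal Python sketch of the repeated-subtraction view (Python floats are doubles rather than singles, but the same thing happens; this is not how fmod is actually implemented, but because Fraction makes each subtraction exact on the stored values, it reproduces what the % operator returns):

    from fractions import Fraction

    def remainder_by_subtraction(dividend, divisor):
        # Work with the exact rational values the two floats actually store,
        # so every subtraction is exact (plain float subtraction would round
        # at each step and give a different answer).
        d = Fraction(dividend)
        s = Fraction(divisor)
        while d >= s:
            d -= s
        return float(d)

    print(remainder_by_subtraction(0.5, 0.1))  # 0.09999999999999998
    print(0.5 % 0.1)                           # 0.09999999999999998 -- same result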
For the 0.5 % 0.1 = (approximately) 0.1 case, 0.5 can be represented exactly, but 0.1 cannot and is rounded up. In binary, 0.5 is simply 0.1 (binary), but decimal 0.1 in binary is 0.000110011001100..., with the last four digits repeating forever. Because of the way the floating-point format works, only 23 digits (in single precision) after the leading 1 can be kept. (See the much-cited What Every Computer Scientist Should Know About Floating-Point Arithmetic for a full explanation.) The value is then rounded up, as that is closer to the decimal value 0.1. So the values that the division algorithm works with are:
0.100000000000000000000000000 --> 0.5 (decimal), and
0.000110011001100110011001101 --> 0.1 (decimal, rounded up)
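You can inspect that bit pattern directly. In Python, for example, round-tripping 0.1 through a single-precision float shows exactly those 23 fraction digits, and the exact stored value, which is slightly greater than one tenth:

    import struct
    from fractions import Fraction

    # Bit pattern of 0.1 stored as an IEEE 754 single-precision float:
    bits = struct.unpack('<I', struct.pack('<f', 0.1))[0]
    print(f'{bits:032b}')
    # 00111101110011001100110011001101
    # = sign 0, exponent 01111011 (2**-4), fraction 10011001100110011001101

    # Exact rational value of the stored single -- note the denominator 2**27
    # and that 13421773/134217728 > 1/10, i.e. it was rounded *up*:
    print(Fraction(struct.unpack('<f', struct.pack('<f', 0.1))[0]))
    # 13421773/134217728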
The division algorithm iterations are:
(1) 0.100000000000000000000000000 - 0.000110011001100110011001101 = 0.011001100110011001100110011
(2) 0.011001100110011001100110011 - 0.000110011001100110011001101 = 0.010011001100110011001100110
(3) 0.010011001100110011001100110 - 0.000110011001100110011001101 = 0.001100110011001100110011001
(4) 0.001100110011001100110011001 - 0.000110011001100110011001101 = 0.000110011001100110011001100
(5) 0.000110011001100110011001100 - 0.000110011001100110011001101 = -0.000000000000000000000000001
As shown, the fifth subtraction would produce a negative result, so the algorithm stops after the fourth iteration, and the value of the dividend left over (the result of step (4)) is the remainder: approximately decimal 0.1.
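You can check this arithmetic with exact integers by scaling everything by 2^27, so the 27-digit binary fractions above become whole numbers (Python again, purely for verification):

    # 0.5 and single-precision 0.1, scaled by 2**27 so the bit patterns
    # above become exact integers:
    dividend = 0b100000000000000000000000000   # 0.5 (decimal)
    divisor  = 0b000110011001100110011001101   # 0.1 (decimal, rounded up)

    count = 0
    while dividend >= divisor:
        dividend -= divisor
        count += 1

    print(count)          # 4 -- four full subtractions fit
    print(bin(dividend))  # 0b110011001100110011001100
    # i.e. 0.000110011001100110011001100 -- one unit in the last place less
    # than the stored 0.1, which is the "approximately 0.1" remainder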
Further, the expression 0.6 % 0.1 works as expected, because 0.6 also gets rounded up. The expression 0.7 % 0.1 doesn't work as expected: although 0.7 can't be represented exactly either, it doesn't get rounded up. I haven't tested this exhaustively, but I think this is what's going on. Which brings me (at last!) to my actual question:
Does anyone know of a simple way to tell whether a particular number will be rounded up?
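The only test I've come up with so far is brute force: compare the exact value the float stores against the exact decimal value. A sketch in Python (single precision, to match the examples above; rounds_up_single is just my own name for it):

    import struct
    from fractions import Fraction

    def rounds_up_single(literal):
        # True if the decimal literal, stored as an IEEE 754 single, comes
        # out slightly greater than its exact decimal value. (The conversion
        # goes via Python's double; that is fine for these values.)
        stored = struct.unpack('<f', struct.pack('<f', float(literal)))[0]
        return Fraction(stored) > Fraction(literal)

    print(rounds_up_single('0.1'))  # True  -- rounded up
    print(rounds_up_single('0.6'))  # True  -- rounded up
    print(rounds_up_single('0.7'))  # False -- rounded down

That works, but I'm hoping for something simpler, ideally a rule that can be applied by inspection.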