Using IEEE 754 rounding, let's see what's going on.
In IEEE 754 single-precision floating point, the value of a finite, normal number is given by the following:
(-1)^sign × 2^exponent × (1 + mantissa × 2^(-23))
Where
- sign is 0 if positive, otherwise 1;
- exponent is a value between -126 and 127 (-127 and 128 are special); and
- mantissa is a value between 0 and 8388607 (because it's a 23-bit integer).
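To make that layout concrete, here's a minimal Python sketch (decode_float32 is just an illustrative helper name) that unpacks a float into those three fields using the standard struct module:

```python
import struct

def decode_float32(x):
    """Unpack a number into its IEEE 754 single-precision fields."""
    # Reinterpret the 4 bytes of the float32 as an unsigned 32-bit integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                        # 1 bit
    exponent = ((bits >> 23) & 0xFF) - 127   # 8 bits, stored with a bias of 127;
                                             # fields 0 and 255 are the special cases
    mantissa = bits & 0x7FFFFF               # 23 bits
    return sign, exponent, mantissa

print(decode_float32(1/3))  # (0, -2, 2796203)
```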
If we substitute sign with 0 and exponent with -2, then we're guaranteed a value between 0.25 (inclusive) and 0.5 (exclusive). Why?
1 × 2^(-2)
is ¼. The value of
1 + mantissa × 2^(-23)
is guaranteed to be at least 1 and less than 2, so that's our sign and exponent sorted.
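A quick sketch verifying that range claim: sweeping the mantissa over its extremes with sign = 0 and exponent = -2 shows the formula stays within [0.25, 0.5):

```python
# With sign = 0 and exponent = -2, try the smallest and largest mantissa.
for mantissa in (0, 8388607):
    value = (1 + mantissa * 2**-23) * 2**-2
    print(mantissa, value)
# 0        0.25                (lower bound, inclusive)
# 8388607  0.4999999701976776  (just under 0.5)
```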
Moving on, we can work out fairly quickly that the mantissa which would make the value exactly ⅓ is 2^23/3 = 2796202.666…, which isn't an integer, so there are two candidate mantissa values: 2796202 and 2796203.
Substituting each in, we get the following two values (one lower, one higher):
- 0.333333313465118408203125 (for mantissa = 2796202)
- 0.3333333432674407958984375 (for mantissa = 2796203)
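A sketch of that derivation, using Fraction to keep the arithmetic exact (the variable names are mine):

```python
import math
from fractions import Fraction

exact = Fraction(2**23, 3)   # the mantissa that would make the value exactly 1/3
for mantissa in (math.floor(exact), math.ceil(exact)):
    # value = (1 + mantissa × 2^(-23)) × 2^(-2), kept exact with Fraction
    value = (1 + Fraction(mantissa, 2**23)) / 4
    print(mantissa, float(value))
# 2796202 0.3333333134651184  (= 0.333333313465118408203125 exactly)
# 2796203 0.3333333432674408  (= 0.3333333432674407958984375 exactly)
```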
The binary representation of the exact value 2796202.666… (up to its 22 integer digits) is:
1010101010101010101010.10101…
As the next digit after the integer part is 1 (and the remaining digits aren't all zero), round-to-nearest rounds the value up, not down. For this reason, the higher candidate has a smaller error than the lower one:
- 0.333333313465118408203125 - ⅓ ≈ -1.987 × 10^(-8)
- 0.3333333432674407958984375 - ⅓ ≈ 9.934 × 10^(-9)
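A sketch tying both points together: it prints the binary digits of the exact mantissa 2^23/3 and reproduces the two errors above:

```python
from fractions import Fraction

exact = Fraction(2**23, 3)
print(bin(exact.numerator // exact.denominator))  # 0b1010101010101010101010
# The fractional part is .101010... in binary, i.e. 2/3 > 1/2,
# so round-to-nearest picks the higher mantissa, 2796203.

third = Fraction(1, 3)
for mantissa in (2796202, 2796203):
    err = (1 + Fraction(mantissa, 2**23)) / 4 - third
    print(mantissa, float(err))
# 2796202 -1.9868214879483604e-08
# 2796203  9.934107439741802e-09
```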
And since the stored value is slightly larger than the exact value, when multiplied back it will be more than 1. That's why it uses a value that appears off at first glance -- binary rounding sometimes goes in the opposite direction of decimal rounding.
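As a final sketch, rounding ⅓ to single precision and multiplying back by 3 shows the overshoot (the product here is evaluated in double precision, which happens to represent this particular product exactly):

```python
import struct

# Round 1/3 to single precision, then read it back as a double.
third_f32 = struct.unpack('>f', struct.pack('>f', 1/3))[0]
print(third_f32)      # 0.3333333432674408
print(third_f32 * 3)  # 1.0000000298023224 -- more than 1
```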