It is also true that 3.3 * 2.0 is numerically identical to 6.6. That computation is nothing more than an increment of the binary exponent, since it is a multiplication by a power of two. You can see this in the following:
    | s exponent    significand
----+-------------------------------------------------------------------
1.1 | 0 01111111111 0001100110011001100110011001100110011001100110011010
2.2 | 0 10000000000 0001100110011001100110011001100110011001100110011010
3.3 | 0 10000000000 1010011001100110011001100110011001100110011001100110
6.6 | 0 10000000001 1010011001100110011001100110011001100110011001100110
Above you see the binary representations of the floating-point numbers 3.3 and 6.6. The only difference between the two is the exponent, since one is just the other multiplied by two. We know that IEEE-754:
- approximates a decimal number with the smallest possible numerical error
- can represent all integers up to 2^53 exactly (for binary64)
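The bit patterns in the table above can be reproduced with a short sketch; it assumes only the standard `struct` module, and the helper name `fields` is illustrative:

```python
import struct

def fields(x: float) -> str:
    """Split the IEEE-754 binary64 pattern of x into sign, exponent, significand."""
    bits = format(struct.unpack(">Q", struct.pack(">d", x))[0], "064b")
    return f"{bits[0]} {bits[1:12]} {bits[12:]}"

for x in (1.1, 2.2, 3.3, 6.6):
    print(f"{x} | {fields(x)}")
```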
So since 2.0 is exactly representable, a multiplication by it is nothing more than a change of the exponent. Hence all of the following produce the same floating-point number:
6.6 == 0.825 * 8.0 == 1.65 * 4.0 == 3.3 * 2.0 == 13.2 * 0.5 == ...
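A quick sketch to verify this chain: multiplying by an exact power of two only shifts the exponent, so every product below is bit-identical to 6.6 (note that `==` on floats compares the exact bit patterns here, nothing is approximated):

```python
# All of these are the literal 6.6 scaled by exact powers of two,
# so rounding happens identically in each binade.
products = [6.6, 0.825 * 8.0, 1.65 * 4.0, 3.3 * 2.0, 13.2 * 0.5]
assert all(p == 6.6 for p in products)
print(products)
```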
Does this mean that 2.2 * 3.0 is different from 6.6 because of the significand? No; the mismatch is just due to rounding errors in the multiplication, since 3.0 is not a power of two, so the product must be rounded back to 53 bits of significand.
An example where it would have worked is 5.5 * 2.0 == 2.2 * 5.0 == 11.0. Here the rounding was favourable.
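A small sketch contrasting the two cases; 3.0 and 5.0 are both exactly representable, yet only one product happens to round onto the same double as the literal:

```python
# Both products need rounding; whether the rounded result matches the
# rounded literal is a matter of where the exact product falls.
print(2.2 * 3.0 == 6.6)   # rounding lands on a neighbouring double
print(2.2 * 5.0 == 11.0)  # here the rounding was favourable
print(5.5 * 2.0 == 11.0)  # power of two: exponent change only, always exact
```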