I have a pretty decent understanding of IEEE 754, so this is not one of those "why does adding number a and number b result in..."-type questions.
Rather, I want to ask whether I've understood the fixed-point number-format specifier correctly, because it's not behaving as I would expect for some double values.
For example:
double d = 0x3FffffFFFFfffe * (1.0 / 0x3FffffFFFFffff);
Console.WriteLine(d.ToString("R"));
Console.WriteLine(d.ToString("G20"));
Console.WriteLine(d.ToString("F20"));
Both the "R"
and "G"
specifier prints out the same thing - the correct value of: 0.99999999999999989
but the "F"
specifier always rounds up to 1.0
no matter how many decimals I tell it to include. Even if I tell it to print the maximum number of 99 decimals ("F99"
) it still only outputs "1."-followed by 99 zeroes.
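To spell out what I'm seeing, here is the same snippet again with the outputs written as comments (these are the results I described above, on .NET 4.0; assume the lines run inside a console app's Main with using System):

double d = 0x3FffffFFFFfffe * (1.0 / 0x3FffffFFFFffff);

Console.WriteLine(d.ToString("R"));    // 0.99999999999999989
Console.WriteLine(d.ToString("G20"));  // 0.99999999999999989
Console.WriteLine(d.ToString("F20"));  // 1.00000000000000000000
Console.WriteLine(d.ToString("F99"));  // "1." followed by 99 zeroes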
So is my understanding wrong (and if so, can someone point me to the relevant section in the spec), or is this behavior broken? (It's no deal-breaker for me, I just want to know.)
Here is what I've looked at, but I see nothing explaining this.
(This is .NET 4.0.)