It's just a matter of what double.ToString()
does. Here's a short but complete program demonstrating the same thing:
using System;

public class Test
{
    static void Main(string[] args)
    {
        // Find the largest double less than 3
        long bits = BitConverter.DoubleToInt64Bits(3);
        double a = BitConverter.Int64BitsToDouble(bits - 1);
        double b = Math.Floor(a);

        // Print them using the default conversion to string...
        Console.WriteLine(a.ToString() + " " + b.ToString());

        // Now use round-trip formatting...
        Console.WriteLine(a.ToString("r") + " " + b.ToString("r"));
    }
}
Output:
3 2
2.9999999999999996 2
Now double.ToString()
is documented with:
This version of the ToString method implicitly uses the general numeric format specifier ("G") and the NumberFormatInfo for the current culture.
... and the general numeric format specifier docs state:
The precision specifier defines the maximum number of significant digits that can appear in the result string. If the precision specifier is omitted or zero, the type of the number determines the default precision, as indicated in the following table.
... where the table shows that the default precision for double
is 15. If you consider 2.9999999999999996 rounded to 15 significant digits, you end up with 3.
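You can make the effect of the precision visible by specifying it explicitly (the "G15" and "G17" specifiers below are just for illustration; adding these lines to the Main method above should print 3 and then the longer form):

    // "G15" matches the documented default precision; "G17" is enough
    // to distinguish every double value.
    Console.WriteLine(a.ToString("G15")); // 3
    Console.WriteLine(a.ToString("G17")); // 2.9999999999999996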
In fact, the exact value of a
here is:
2.999999999999999555910790149937383830547332763671875
... which again, is 3 when regarded with 15 significant digits.
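If you want to verify that exact value yourself, one approach (just a sketch, not necessarily how the value above was obtained; the class and helper names here are hypothetical) is to pull the mantissa and binary exponent out of the bits and do the arithmetic with BigInteger. A double with a negative binary exponent is mantissa / 2^k, which is (mantissa * 5^k) / 10^k, i.e. the digits of mantissa * 5^k with the decimal point k places from the right:

    using System;
    using System.Numerics;

    public class ExactValueDemo
    {
        static void Main()
        {
            long bits = BitConverter.DoubleToInt64Bits(3);
            double a = BitConverter.Int64BitsToDouble(bits - 1);
            Console.WriteLine(ExactDecimalString(a));
            // 2.999999999999999555910790149937383830547332763671875
        }

        // Exact decimal representation of a finite, non-negative double.
        // (Sketch only: no handling of negative values, infinities or NaN.)
        static string ExactDecimalString(double value)
        {
            long bits = BitConverter.DoubleToInt64Bits(value);
            long mantissa = bits & 0xFFFFFFFFFFFFFL;      // low 52 bits
            int exponent = (int) ((bits >> 52) & 0x7FF);  // 11-bit biased exponent

            if (exponent == 0)
            {
                exponent = -1074;         // subnormal: no implicit leading bit
            }
            else
            {
                mantissa |= 1L << 52;     // normal: restore the implicit leading bit
                exponent -= 1075;         // remove the bias (1023) and the 52 fraction bits
            }

            BigInteger m = mantissa;
            if (exponent >= 0)
            {
                return (m << exponent).ToString();  // whole number: mantissa * 2^exponent
            }

            // value = m / 2^k = (m * 5^k) / 10^k, so compute the digits of m * 5^k
            // and put the decimal point k places from the right.
            int k = -exponent;
            string digits = (m * BigInteger.Pow(5, k)).ToString().PadLeft(k + 1, '0');
            return (digits.Substring(0, digits.Length - k) + "."
                    + digits.Substring(digits.Length - k)).TrimEnd('0').TrimEnd('.');
        }
    }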