Ideally, a string-to-double method would always yield the double
whose value is closest to the exact numerical value of the specified string. For example, since "102030405060708072.99" is only 7.01 away from the next larger representable double but 8.99 away from the next smaller one, it should round to the larger value. Neither Convert.ToDouble nor Double.Parse seems to work that way, however; both round that value down.
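For concreteness, here is a minimal repro sketch of that experiment; the two constants below are the representable doubles that bracket the decimal value, and which comparison prints True may depend on the runtime, since (as discussed below) parsing behavior apparently varies between platforms:

```csharp
using System;
using System.Globalization;

class ParseRepro
{
    static void Main()
    {
        string s = "102030405060708072.99";

        // The two representable doubles that bracket the decimal value;
        // the spacing (one LSB) between doubles at this magnitude is 16.
        double lower  = 102030405060708064.0;  // 8.99 below the exact value
        double higher = 102030405060708080.0;  // 7.01 above the exact value

        double viaConvert = Convert.ToDouble(s, CultureInfo.InvariantCulture);
        double viaParse   = double.Parse(s, CultureInfo.InvariantCulture);

        Console.WriteLine(viaConvert == viaParse);  // expected True; Convert.ToDouble defers to Double.Parse
        Console.WriteLine(viaParse == higher);      // True only if the parse is correctly rounded
        Console.WriteLine(viaParse == lower);       // True on runtimes that round this value down
    }
}
```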
It appears that the behavior is to ignore everything past the eighteenth digit, even though digits beyond that point could affect how the value is rounded. Is there anything that specifies such behavior? Is there anything that specifies that every decimal representation of 18 digits or fewer will always be mapped to a double value which is no more than half an LSB away from the precise represented value, notwithstanding the fact that numbers of 19 or more digits may be mapped to a value which is nearly 9/16 of an LSB away from the indicated value? (Truncating at the eighteenth digit can discard almost one unit in that digit, which for the example above is 1/16 of an LSB; adding the 1/2 LSB that correct rounding of the truncated value may introduce yields an error approaching 9/16 of an LSB.)
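The 9/16 figure can be measured directly: System.Decimal carries 28-29 significant digits, so it holds the 20-digit example exactly and can serve as a reference value. A minimal sketch:

```csharp
using System;
using System.Globalization;

class LsbError
{
    static void Main()
    {
        string s = "102030405060708072.99";

        // decimal represents all 20 significant digits of the example
        // exactly, so it can serve as the arithmetically-precise reference.
        decimal exact  = decimal.Parse(s, CultureInfo.InvariantCulture);
        double  parsed = double.Parse(s, CultureInfo.InvariantCulture);

        // One LSB of a double near 1.02e17 is 2^4 = 16, since
        // 2^56 <= value < 2^57 and a double carries 52 fraction bits.
        const decimal ulp = 16m;

        decimal errorInLsbs = Math.Abs((decimal)parsed - exact) / ulp;
        Console.WriteLine(errorInLsbs.ToString(CultureInfo.InvariantCulture));
        // 8.99/16 = 0.561875 (close to 9/16) if the value was rounded down;
        // 7.01/16 = 0.438125 if it was correctly rounded up.
    }
}
```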
Based upon "Why is a round-trip conversion via a string not safe for a double?", it would appear that the behavior of conversion routines may vary between platforms. To what extent is such behavior a deviation from the spec, and to what extent does the spec leave such behaviors open? Is there anything in any specification which would indicate that adding a 0 to the end of the fractional part of a formatted double will not affect the result of parsing it?
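A small probe for that last question: format a value with R, insert an extra 0 at the end of the fractional part (taking care not to disturb any exponent suffix such as E+23 that R may emit), and reparse. This only tests the implementation at hand, of course; it cannot answer what a specification guarantees:

```csharp
using System;
using System.Globalization;

class PaddingProbe
{
    static void Main()
    {
        double d = 2.0 / 3.0;
        string r = d.ToString("R", CultureInfo.InvariantCulture);

        // Append a '0' to the fractional part, leaving any exponent
        // suffix (e.g. "E+23") untouched.
        int e = r.IndexOfAny(new[] { 'E', 'e' });
        string mantissa = e < 0 ? r : r.Substring(0, e);
        string suffix   = e < 0 ? "" : r.Substring(e);
        string padded   = (mantissa.IndexOf('.') >= 0 ? mantissa + "0"
                                                      : mantissa + ".0") + suffix;

        Console.WriteLine(r);
        Console.WriteLine(padded);
        Console.WriteLine(double.Parse(padded, CultureInfo.InvariantCulture) == d);
    }
}
```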
My own expectation would be that floating-point methods should specify the degree of accuracy they promise. In the absence of any specification to the contrary, I would expect a floating-point method to yield a result within 1/2 LSB of the arithmetically-precise result that would be obtained for some combination of numeric operands each within 1/2 LSB of the values passed, since that level of accuracy can often be achieved for about the same cost as anything sloppier, while accuracy beyond that is often much more expensive. If a method is going to take extra time to achieve better accuracy, it should specify that (to encourage code that needs to be fast, but does not need the full precision, to consider a faster alternative). Code should not require for correctness that a method it uses be more accurate than that unless the method promises better accuracy.
The R formatting option for double values in .NET is supposed to yield a string which, when parsed, will match the original operand, but it does not specify what method of parsing must be used to achieve that result. If one wants to convert floating-point values to strings in such a way as to guarantee system-independent round-trip-ability, what must one do to ensure that any legitimate implementation of a parsing method will parse them the same way?
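Absent a specification-level answer, one can at least probe a given implementation by hammering the format/parse pair with random bit patterns and comparing raw bits on the way back. A sketch, which demonstrates the behavior of the local runtime only, not of any other platform's parser:

```csharp
using System;
using System.Globalization;

class RoundTripCheck
{
    static bool RoundTrips(double d)
    {
        string s = d.ToString("R", CultureInfo.InvariantCulture);
        double back = double.Parse(s, CultureInfo.InvariantCulture);
        // Compare raw bits so that -0.0 and 0.0 are distinguished.
        return BitConverter.DoubleToInt64Bits(back)
            == BitConverter.DoubleToInt64Bits(d);
    }

    static void Main()
    {
        var rng = new Random(12345);
        var buf = new byte[8];
        int failures = 0;

        for (int i = 0; i < 100000; i++)
        {
            // Random bit patterns cover subnormals and extreme exponents.
            rng.NextBytes(buf);
            double d = BitConverter.ToDouble(buf, 0);
            if (double.IsNaN(d)) continue;  // NaN payloads need not survive text
            if (!RoundTrips(d)) failures++;
        }

        // The linked question reports that some platforms do produce
        // failures here, which is precisely the concern raised above.
        Console.WriteLine("failures: " + failures);
    }
}
```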