
Hey, I was trying out a test method on a WCF SOAP web service.

public Double TestDouble(Double x) { return x; }

The test tool only lets me enter 15 significant digits.

I can use SoapUI to send more digits; here's a request with 17 significant figures:

   <soapenv:Header/>
   <soapenv:Body>
      <td:TestDouble>
         <!--Optional:-->
         <td:x>13.075815372878123</td:x>
      </td:TestDouble>
   </soapenv:Body>

In general, clients tend to send however many significant figures they want, so this was just a simple test to see whether discrepant numbers come back from the service.

The result also has 17 figures but is slightly higher, so the input doesn't match the output (when it seemingly should):

   <s:Body>
      <TestDoubleResponse xmlns="http://ocdusrow3rndd1">
         <TestDoubleResult>13.075815372878124</TestDoubleResult>
      </TestDoubleResponse>
   </s:Body>

When run in debug mode, the web service appears to have received the correct original value.

So how is it being changed before given back?

user17753
  • Your question is answered by asking _"What is the precision of a `double` in C# / .NET"_, which is answered in [Precision of double after decimal point](http://stackoverflow.com/questions/12089817/precision-of-double-after-decimal-point). – CodeCaster Feb 11 '14 at 19:44

2 Answers

1

A double does not store the exact decimal number; it stores a binary approximation. So when the value is displayed again, the least significant digits can change.

From MSDN:

Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or Math.PI), binary fractions are unable to represent some fractional values. For example, 1/10, which is represented precisely by .1 as a decimal fraction, is represented by .001100110011 as a binary fraction, with the pattern "0011" repeating to infinity. In this case, the floating-point value provides an imprecise representation of the number that it represents. Performing additional mathematical operations on the original floating-point value often tends to increase its lack of precision. For example, if we compare the result of multiplying .1 by 10 and adding .1 to .1 nine times, we see that addition, because it has involved eight more operations, has produced the less precise result. Note that this disparity is apparent only if we display the two Double values by using the "R" standard numeric format string, which if necessary displays all 17 digits of precision supported by the Double type.
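You can see this with the exact value from the question: the two decimal strings 13.075815372878123 and 13.075815372878124 round to the same 64-bit double, so the serializer's round-trip formatting can legitimately return the second one. A quick sketch (in Python, since the effect is pure IEEE-754 and not WCF-specific):

```python
# Near 13.0, adjacent doubles are spaced 2**-49 (about 1.78e-15) apart,
# so the 1e-15 gap between these two decimal strings falls within a
# single double. Both parse to the same bit pattern.
a = float("13.075815372878123")  # value sent in the SOAP request
b = float("13.075815372878124")  # value the service sent back

print(a == b)     # True: the two literals are one and the same double
print(a.hex())    # exact bit pattern; identical for both strings
```

The service isn't changing the number at all; the round-trip formatter simply picks the 17-digit representation of the one double both strings denote.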

Doug
-1

From what I have read, a double's accuracy is only about 16 decimal digits. So try using decimal instead of double.

When to use and not to use double is discussed here: When should I use double instead of decimal?
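The trade-off can be sketched like this (shown in Python, whose decimal module plays the role that C#'s decimal type would play here):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so adding it
# repeatedly drifts away from the mathematically exact result...
tenth_sum = sum([0.1] * 10)            # 0.1 added ten times as a double
print(tenth_sum == 1.0)                # False: accumulated rounding error

# ...while a base-10 type represents 0.1 exactly, so sums stay exact.
exact_sum = sum([Decimal("0.1")] * 10)
print(exact_sum == Decimal("1.0"))     # True
```

The cost is that decimal types are slower and have a smaller range, which is why double remains the default for scientific and measurement data.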

Community
ElectricRouge
  • __double__ has no fixed accuracy: it isn't a number representation that is decimal based. Check the MSDN link in @Doug's answer – Askolein Aug 03 '15 at 14:35
  • @Askolein It says, "All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally." – ElectricRouge Aug 04 '15 at 12:35
  • @ElectricRouge I get your point. But this is an internal representation precision, not precision linked to the real value of the number itself. Roughly speaking, it explains that the "rounded" representation based on a double has that precision limit. But in almost all cases, the real precision will be far less than that. This is what OP experienced. – Askolein Aug 04 '15 at 16:07