
If you compile and run the following code, the result is a bit bizarre:

decimal x = (276/304)*304;
double y = (276/304)*304;

Console.WriteLine("decimal x = " + x);
Console.WriteLine("double y = " + y);

Result:

decimal x = 275.99999999999999999999999

double y = 276.0

Can someone explain this to me? I don't understand how this can be correct.

Peter
  • Actually, the result of both of the first expressions is simply 0. The arithmetic on the RHS of the assignment operator is performed in the integer domain, so the bracketed expression has the value 0 in each case. – Jon Skeet Feb 17 '11 at 14:08
  • This is to be expected (given a valid expression). Not all floating point numbers can be represented exactly in binary so there will be rounding errors in calculations. You will get different rounding errors for `decimal` and `double` as their bit representations are different. This is also a duplicate question. – ChrisF Feb 17 '11 at 14:09
  • It is correct. Search SO for the tags [float] or [floating-point] and you'll see lots of other people asking the same question. – S.Lott Feb 17 '11 at 14:10
  • sorry for the duplicate Q I didn't know how to search for this, it didn't come up in the list when asking the Q. – Peter Feb 17 '11 at 14:15
  • possible duplicate of [Precision of Floating Point](http://stackoverflow.com/questions/872544/precision-of-floating-point) – ChrisF Feb 17 '11 at 14:43
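As Jon Skeet's comment notes, the snippet as posted performs integer division, so both bracketed expressions are 0. Presumably the intended code forced decimal and double arithmetic with typed literals, along these lines (a sketch of the likely intent, not the original program):

```csharp
using System;

class Program
{
    static void Main()
    {
        // The m suffix and the .0 make the divisions decimal and double
        // respectively, instead of integer division (which yields 0).
        decimal x = (276m / 304) * 304;
        double y = (276.0 / 304) * 304;

        Console.WriteLine("decimal x = " + x);  // 275.999... (falls just short of 276)
        Console.WriteLine("double y = " + y);   // 276
    }
}
```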

3 Answers

10

276/304 = 69/76 is a recurring "decimal" in both base 10 and base 2.

  • decimal: 0.90(789473684210526315)
  • binary: 0.11(101000011010111100)

So the result gets rounded off, and multiplying by the denominator may not give back the original numerator. A more commonly-cited example of this situation is 1/3*3 = 0.33333333*3 = 0.99999999.
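The 1/3 case is easy to reproduce with `decimal` (a minimal sketch):

```csharp
using System;

// decimal carries about 28 significant digits, so 1/3 is truncated
// to a long-but-finite run of 3s; multiplying back by 3 gives a run
// of 9s rather than exactly 1.
decimal third = 1m / 3;        // 0.3333... (recurring, so rounded off)
decimal roundTrip = third * 3; // 0.9999... (not 1)

Console.WriteLine(roundTrip == 1m);  // False
```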

That the double version gives the exact answer is just a coincidence. The rounding error in the multiplication just happens to cancel out the rounding error in the division.

If this result is confusing, it may be because you've heard that "double has rounding errors and decimal is exact". But decimal is only exact at representing decimal fractions like 0.1 (which is 0.0 0011 0011... in binary). When you have a factor of 19 in the denominator, it doesn't help you.
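The classic illustration of that difference is adding 0.1 ten times (a sketch):

```csharp
using System;

double dSum = 0.0;
decimal mSum = 0.0m;

for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // 0.1 has no exact binary representation
    mSum += 0.1m;  // but it is exact as a decimal fraction
}

Console.WriteLine(dSum == 1.0);  // False
Console.WriteLine(mSum == 1m);   // True
```

But as soon as the fraction isn't a decimal one, such as 69/76 above, `decimal` rounds just like `double` does.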

dan04
  • 87,747
  • 23
  • 163
  • 198
  • Thanks for giving me a clear explanation! So the solution is when comparing numeric values you should always round to the amount of decimals you're calculating with and you should be fine? In my case I need to verify if 275.999999999999999 == 276 so I'll just round up by 6 decimals and the result should be true. – Peter Feb 17 '11 at 14:36
  • 1
    @Peter No this doesn't work in every case either. For example imagine you're comparing 1.000000499 with 1.000000501 and round both to 6 decimals then you get 1.000000 != 1.000001 even so their difference is very small. – CodesInChaos Feb 17 '11 at 14:50
  • But the floating point rounding error in decimals could never be 1.000000499 right? It would always be the 27th decimal that's off? – Peter Feb 17 '11 at 14:59
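Following CodesInChaos's caveat, a tolerance-based comparison is usually safer than rounding both sides. A sketch; the tolerance value here is an arbitrary illustration and must be chosen to suit the scale of your own data:

```csharp
using System;

decimal x = (276m / 304) * 304;  // 275.999..., not exactly 276
decimal tolerance = 0.000001m;   // hypothetical choice for this example

// Treat the values as equal when they differ by less than the tolerance.
bool effectivelyEqual = Math.Abs(x - 276m) < tolerance;
Console.WriteLine(effectivelyEqual);  // True
```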
4

Well, floating point precision isn't 100%.
See for example: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm

Daniel Hilgarth
  • Also see this for Decimal vs Double: http://social.msdn.microsoft.com/forums/en-US/csharpgeneral/thread/921a8ffc-9829-4145-bdc9-a96c1ec174a5/ – Filip Ekberg Feb 17 '11 at 14:08
  • @Filip Pretty much every post in that thread contains big mistakes. – CodesInChaos Feb 17 '11 at 14:15
  • so why is there a different result between the two? If both are floating point variables? – Peter Feb 17 '11 at 14:17
  • 3
    One is in base 10 and one in base 2. There are numbers you can exactly represent in base 10 and not base 2. For example `0.1` can be represented exactly in `Decimal` but not `Double`. And `Decimal` throws exceptions on overflows instead of silently using infinities or NaNs. But neither can represent `1/3` exactly. – CodesInChaos Feb 17 '11 at 14:20
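The overflow behaviour CodesInChaos mentions is easy to check (a sketch; runtime variables are used because overflowing `decimal` constant expressions are rejected at compile time):

```csharp
using System;

// double overflows silently to infinity...
double dBig = double.MaxValue;
Console.WriteLine(double.IsInfinity(dBig * 2));  // True

// ...while decimal throws instead.
decimal mBig = decimal.MaxValue;
try
{
    decimal overflowed = mBig * 2;
    Console.WriteLine(overflowed);  // never reached
}
catch (OverflowException)
{
    Console.WriteLine("decimal overflow throws OverflowException");
}
```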
-2

Well, mathematically 0.99999... == 1. Have a look at http://en.wikipedia.org/wiki/0.999... I know that programmatically it poses some problems, but it's not entirely a floating-point issue.

JonC