2

I used to think I understood the difference between decimal and double values, but now I'm not able to explain the behavior of this code snippet.

I need to divide the difference between two decimal numbers into a number of intervals, for example:

decimal minimum = 0.158m;
decimal maximum = 64.0m;
decimal delta = (maximum - minimum) / 6; // 10.640333333333333333333333333

Then I create the intervals in reverse order, but the first result is already unexpected:

for (int i = 5; i >= 0; i--)
{
   Interval interval = new Interval(minimum + (delta * i), minimum + (delta * (i + 1)));
}

{53.359666666666666666666666665, 63.999999999999999999999999998}

I would expect the maximum value to be exactly 64. What am I missing here?

Thank you very much!

EDIT: if I use double instead of decimal, it seems to work properly!

Alessandro
  • I believe this is actually the opposite issue most people have with doubles. It seems like the double is doing the rounding while the decimal is not. – Nomad101 May 03 '13 at 09:03
  • http://stackoverflow.com/questions/1089018/why-cant-decimal-numbers-be-represented-exactly-in-binary and http://stackoverflow.com/questions/618535/what-is-the-difference-between-decimal-float-and-double-in-c – I4V May 03 '13 at 09:08
  • Instead of storing the result in delta, substitute your loop body like this: Interval interval = new Interval(minimum + ((maximum - minimum) * i) / 6, minimum + (((maximum - minimum) * (i + 1)) / 6)); – cvraman May 03 '13 at 09:16
  • @cvraman That's what I thought at first, but the rounding error will persist; I tried it in code real quick - same outcome. – John Willemse May 03 '13 at 09:30
  • Yes, it doesn't solve the problem.. – Alessandro May 03 '13 at 09:31

2 Answers

2

You're not missing anything. This is the result of rounding the numbers multiple times internally, i.e. a compounding loss of precision. The exact quotient (maximum - minimum) / 6 has 3s repeating endlessly, so the stored delta, 10.640333333333333333333333333, is already rounded, and that error compounds every time you multiply or divide with it.
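For instance, a minimal sketch with the same literal values as in the question shows that the rounded delta no longer divides the range exactly:

decimal minimum = 0.158m;
decimal maximum = 64.0m;
decimal delta = (maximum - minimum) / 6; // 10.640333333333333333333333333, already rounded

// Multiplying the rounded delta back by 6 does not restore the full range:
Console.WriteLine(delta * 6);           // 63.841999999999999999999999998 instead of 63.842
Console.WriteLine(minimum + delta * 6); // 63.999999999999999999999999998 instead of 64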

Maybe you could do it like this instead:

for (decimal i = maximum; i >= delta; i -= delta)
{
   Interval interval = new Interval(i - delta, i);
}
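For example (printing the boundaries directly rather than constructing Interval objects, since that type isn't shown in the question), the upper bound of the first interval is now maximum itself:

decimal minimum = 0.158m;
decimal maximum = 64.0m;
decimal delta = (maximum - minimum) / 6;

for (decimal i = maximum; i >= delta; i -= delta)
{
   // The first pass uses maximum directly as the upper bound
   // instead of reconstructing it as minimum + delta * 6.
   Console.WriteLine("{0} .. {1}", i - delta, i);
}

Counting down from maximum pushes the rounding residue toward the lower end of the range, so the top boundary stays exact.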
John Willemse
  • Thank you, now I understand this is a completely different problem from the floating point one. But, is there a solution for my specific snippet (other than setting the boundaries by hand, obviously)? – Alessandro May 03 '13 at 09:11
  • Well, if you can work with doubles instead, that would solve it like you said in the OP, but otherwise you're stuck with checking the outcomes and applying your own rounding to it I'm afraid. – John Willemse May 03 '13 at 09:17
  • No, it is not possible, so I will round up the boundaries by hand! :) – Alessandro May 03 '13 at 09:22
  • If you need exact arithmetic with rationals (exact fractions), you need something [like this](http://www.codeproject.com/Articles/88980/Rational-Numbers-NET-4-0-Version-Rational-Computin), there's nothing built-in. – AakashM May 03 '13 at 09:31
0

Double has about 16 digits of precision while decimal has about 29 digits of precision. Thus, double is more likely to round the result off than decimal is.
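For what it's worth, a side-by-side sketch with the same literals as in the question illustrates this: decimal carries all of its ~29 significant digits, while the double result, carried at roughly 16 significant digits, comes back as 64, matching the edit in the question:

decimal dMin = 0.158m, dMax = 64.0m;
decimal dDelta = (dMax - dMin) / 6;
Console.WriteLine(dMin + dDelta * 6); // 63.999999999999999999999999998

double fMin = 0.158, fMax = 64.0;
double fDelta = (fMax - fMin) / 6;
Console.WriteLine(fMin + fDelta * 6); // 64, as observed in the question's edit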

Edper