[Test]
public void Calculation()
{
  decimal a = 400m;
  decimal b = 12m;
  decimal c = 2m;
  var result = a / b / c;

  Assert.AreEqual(result, 400m / 24m);
}

Test Outcome: Failed
Result Message:
Expected: 16.666666666666666666666666666m
But was : 16.666666666666666666666666667m

Why are these two decimals different?
What can I use instead of 400m / 24m to make it equal to result?

Butters
  • [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – crashmstr Jan 05 '15 at 16:31
  • Decimal can't represent 1/3 exactly. Either use epsilon comparisons (embrace the errors), or use some kind of `BigRational` type (potentially high memory usage and low performance). – CodesInChaos Jan 05 '15 at 16:31

1 Answer


Because decimal, just like float and double, has limited precision. Try

1m/3m + 1m/3m + 1m/3m

you'll get 0.999...9, not 1.
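
For example, here is a small console sketch (my own illustration, not part of the original answer) that makes the rounding visible:

using System;

class ThirdsDemo
{
    static void Main()
    {
        // Each 1m/3m is rounded to a finite number of digits,
        // so the three rounded thirds no longer sum to exactly 1m.
        decimal third = 1m / 3m;
        decimal sum = third + third + third;

        Console.WriteLine(third);     // 0.3333...3
        Console.WriteLine(sum);       // 0.9999...9
        Console.WriteLine(sum == 1m); // False
    }
}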

decimal differs from double in the base of its exponent and in its size; its precision sits between double and quad. The differences between decimal and double are (see the sketch after this list):

  • decimal is bigger (128 bits)
  • decimal is base-10 (10 = 2·5), while double is base-2.
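
A short sketch (illustrative, assuming a plain console project) of what the base difference means in practice: 0.1 has no finite base-2 representation, but it is exact in base-10:

using System;

class BaseDemo
{
    static void Main()
    {
        // In base-2 (double), 0.1 and 0.2 are rounded, so their sum misses 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // In base-10 (decimal), 0.1m, 0.2m and 0.3m are all exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}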

decimal can represent a division result exactly only when the divisor's prime factors are 2 and 5; otherwise the result is a repeating decimal and has to be rounded. 12 has the prime factor 3, so the result of this division will not be exact.
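
For instance (an illustrative sketch, not from the original post), divisors whose only prime factors are 2 and 5 come out exact, while a divisor containing the factor 3 forces rounding:

using System;

class PrimeFactorDemo
{
    static void Main()
    {
        Console.WriteLine(400m / 25m); // 16         (25 = 5·5, exact)
        Console.WriteLine(400m / 32m); // 12.5       (32 = 2^5, exact)
        Console.WriteLine(400m / 12m); // 33.333...3 (12 = 2·2·3, rounded)
    }
}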

Use 400m / 12m / 2m instead. I guarantee it will be exactly equal to result, because both sides perform the same divisions in the same order and therefore round the same way.
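
A sketch of what the fixed test could look like (same NUnit style as the question; the method name is made up):

[Test]
public void Calculation_SameDivisionOrder()
{
    decimal a = 400m;
    decimal b = 12m;
    decimal c = 2m;
    var result = a / b / c;

    // Both sides now perform the same divisions in the same order,
    // so they are rounded identically and compare equal.
    Assert.AreEqual(result, 400m / 12m / 2m);
}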

user2622016