
I am working with a lot of mathematical operations, mostly divisions, and it's really important that after all calculations have been done, all the numbers match the initial state. For example, I am testing the following code for doubles in C#:

        double total = 1000;
        double numberOfParts = 6;

        var sumOfResults = 0.0;
        for (var i = 0; i < numberOfParts; i++)
        {
            var result = total / numberOfParts;
            sumOfResults += result;
        }

This code gives a sumOfResults of 999.9999999999999, but I expect 1000. A similar problem happens when using decimals, just with more precision:

        decimal total = 1000m;
        decimal numberOfParts = 6m;

        var sumOfResults = decimal.Zero;
        for (var i = 0; i < numberOfParts; i++)
        {
            var result = total / numberOfParts;
            sumOfResults += result;
        }

I expected sumOfResults to be 1000M, but found 1000.0000000000000000000000001M. So even here, when I need to compare with the initial state of 1000, I will never get back the same state I had before dividing the numbers.

I am aware of the field of Numerical Analysis and everything, but is there some library that will help me get the exact number of 1000 after summing all the division results?

  • You'd need something that stores values as fractions of integers (and you'd have to avoid irrational numbers like Pi). However, seeking libraries is off topic for Stack Overflow. Alternatively you can do comparisons within a precision instead of trying to do exact matching: `Math.Abs(actual - expected) < epsilon` where `epsilon` is some small number. – juharr May 14 '21 at 13:28
  • Either use a *rational* type (say, `BigRational`: https://searchcode.com/codesearch/view/8725703/) or compare the `double` / `decimal` value with a *tolerance*: `bool equal = Math.Abs(expected - actual) <= tolerance;` – Dmitry Bychenko May 14 '21 at 13:30
  • Thx for the BigRational link, this looks good. Btw I use a technique like `bool equal = Math.Abs(expected - actual) <= tolerance;`, but these tolerances accumulate over the millions of operations I am doing and cause problems. – makigjuro May 14 '21 at 13:35
  • See [recent post on C# precision](https://stackoverflow.com/a/67496138/13813219) – JAlex May 14 '21 at 13:58
  • If you represent the circumference of the earth with a double, then the precision equals 7 nanometers. Is that not enough? Or is it a visibility issue where you want the numbers to show to the nearest integer? – JAlex May 14 '21 at 15:22

2 Answers


If you are looking for an exact result you have to work with rational numbers; there are plenty of assemblies which implement a BigRational type. Say, you can try my own HigherArithmetics library (it targets .NET 5):

  using HigherArithmetics.Numerics;

  ... 

  BigRational total = 1000;
  BigRational numberOfParts = 6;

  BigRational sumOfResults = 0;
  
  for (var i = 0; i < numberOfParts; i++) {
    var result = total / numberOfParts;
    sumOfResults += result;
  }

  Console.Write(sumOfResults);

Outcome:

  1000
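
If an extra dependency is not an option, the same fractions-of-integers idea can be sketched by hand on top of System.Numerics.BigInteger. The Fraction type below is purely illustrative (it is not part of any library) and implements only what this example needs:

    using System;
    using System.Numerics;

    // Minimal illustrative rational type: stores an exact numerator/denominator
    // pair, so division and addition introduce no rounding error at all.
    readonly struct Fraction
    {
        public BigInteger Numerator { get; }
        public BigInteger Denominator { get; }

        public Fraction(BigInteger numerator, BigInteger denominator)
        {
            if (denominator.IsZero)
                throw new DivideByZeroException();

            // Normalize: lowest terms, positive denominator.
            var gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
            if (denominator.Sign < 0)
                gcd = -gcd;

            Numerator = numerator / gcd;
            Denominator = denominator / gcd;
        }

        public static implicit operator Fraction(int value) => new Fraction(value, 1);

        public static Fraction operator +(Fraction a, Fraction b) =>
            new Fraction(a.Numerator * b.Denominator + b.Numerator * a.Denominator,
                         a.Denominator * b.Denominator);

        public static Fraction operator /(Fraction a, Fraction b) =>
            new Fraction(a.Numerator * b.Denominator, a.Denominator * b.Numerator);

        public override string ToString() =>
            Denominator.IsOne ? Numerator.ToString() : $"{Numerator}/{Denominator}";
    }

    class Program
    {
        static void Main()
        {
            Fraction total = 1000;
            int numberOfParts = 6;

            Fraction sumOfResults = 0;

            for (var i = 0; i < numberOfParts; i++)
                sumOfResults += total / numberOfParts;   // each term is exactly 500/3

            Console.WriteLine(sumOfResults);             // 1000, exactly
        }
    }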

If, however, you want to use the standard double or decimal type, you have to compare with a tolerance:

  double tolerance = 1e-6; 

  double total = 1000;
  double numberOfParts = 6;

  double sumOfResults = 0;
  
  for (var i = 0; i < numberOfParts; i++) {
    var result = total / numberOfParts;
    sumOfResults += result;
  }

  sumOfResults = Math.Abs(total - sumOfResults) <= tolerance
    ? total
    : sumOfResults;

  Console.Write(sumOfResults);

Finally, yet another possibility is to round the answer:

  sumOfResults = Math.Round(sumOfResults, 6);
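
The same rounding idea also covers the decimal case from the question; a quick check (the literal below is just the value the question reports):

    decimal decimalSum = 1000.0000000000000000000000001m;      // value reported in the question
    Console.WriteLine(decimal.Round(decimalSum, 6) == 1000m);  // True
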
Dmitry Bychenko
  • Great, the BigRational is what I am looking for. Thx a lot! – makigjuro May 14 '21 at 13:46
  • Your tolerance should be a multiple of `1ulp`, which for a value `x` is calculated as `eps(x) = 2^(floor(log(x,2))-52)` (see the sketch after these comments). – JAlex May 14 '21 at 14:00
  • @JAlex: not necessarily; when we add up *several* inexact values, the total error can well be greater than `eps` – Dmitry Bychenko May 14 '21 at 14:08
  • @DmitryBychenko - I agree. I should have said **several** multiples of `1ulp`, depending on how many bits have lost precision. The factor of `1ulp` for error might be in the thousands for regular math (and usually a power of 2), and with trigonometric functions it approaches hundreds of millions of `ulp`. – JAlex May 14 '21 at 15:51
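
A minimal sketch of that `1ulp` calculation, assuming .NET Core 3.0 or later for `Math.ILogB` / `Math.ScaleB`:

    using System;

    class UlpDemo
    {
        // Spacing between adjacent doubles near x: eps(x) = 2^(floor(log2(|x|)) - 52).
        static double Ulp(double x) => Math.ScaleB(1.0, Math.ILogB(x) - 52);

        static void Main()
        {
            // Near 1000 the spacing is 2^-43, which matches the error seen in the question.
            Console.WriteLine(Ulp(1000.0));                 // ~1.14E-13
            Console.WriteLine(1000.0 - 999.9999999999999);  // ~1.14E-13 as well
        }
    }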

This is more a matter of human perception than an actual numeric problem. Almost every floating point number is inexact due to finite machine precision. Mathematically, the difference between 1000.0 and 999.9999999999999 is negligible for most operations.

The solution might seem odd to you, but it does relieve the anxiety that inexact computation readouts cause.

    double total = 1000;
    double numberOfParts = 6;

    var sumOfResults = 0.0;
    for (var i = 0; i < numberOfParts; i++)
    {
        var result = total / numberOfParts;
        sumOfResults += result;
    }

    Console.WriteLine((float)sumOfResults);
    // 1000

Simply reduce the precision for human-readable output. You saw how increasing precision makes things worse, so go the other way around. The system already does this to an extent, as the default double.ToString() rounds away the least significant digits.

Or you can control the number of significant digits shown with the `G` format specifier followed by a precision, e.g. `G5`:

    Console.WriteLine($"{sumOfResults:G5}");
    // 1000
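
For comparison, here is the value from the question shown at a few different precisions (expected output on .NET 5; `G17` round-trips the full stored value, so it exposes the inaccuracy rather than hiding it):

    double sum = 999.9999999999999;          // the value printed in the question
    Console.WriteLine(sum.ToString("G17"));  // 999.99999999999989 - the full stored value
    Console.WriteLine(sum.ToString("G15"));  // 1000
    Console.WriteLine(sum.ToString("F2"));   // 1000.00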

In summary, the "issue" you see is common to all computers that use IEEE 754 floating point types, where most numbers are not represented exactly.

For example, `Math.PI` is shown below both as mathematically defined and as displayed by C#:

    Environment       Value
    π                 3.141592653589793238462643383279502884...
    .NET 5            3.141592653589793
    Framework v4.8    3.141592653589791
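
To reproduce the table on a given runtime (the expected output on .NET 5 is shown in the comments):

    Console.WriteLine(Math.PI);                  // 3.141592653589793 (default, shortest round-trip)
    Console.WriteLine(Math.PI.ToString("G17"));  // 3.1415926535897931 (every stored digit)
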
JAlex