using System;
using System.Collections.Generic;
using System.Linq;

public static void Main()
{
    Dictionary<string, double> values = new Dictionary<string, double>();
    values.Add("a", 0.002);
    values.Add("b", 0.003);
    values.Add("c", 0.012);

    // Summing iteratively.
    double v1 = 615.0;
    foreach (KeyValuePair<string, double> kp in values)
    {
        v1 += kp.Value;
    }

    Console.WriteLine(v1);

    // Summing using the Sum method.
    double v2 = 615.0;
    v2 += values.Values.Sum();

    Console.WriteLine(v2);

    Console.ReadLine();
}

When I look at the value of v1 in the debugger it shows 615.01699999999994, but for v2 it shows 615.017. For some reason the Sum method yields an accurate result whereas summing the values iteratively does not. (When I print the two values they are the same, but I presume this is due to some rounding that the WriteLine method does.)

Anyone know what is going on here?

dlev
TheBoss
  • It has to do with you using double instead of decimal. – Gabriel GM Apr 07 '12 at 16:39
  • 2
    Ain't floating point arithmatic grand? Take a look at this topic because I think you'll find a good answer there: http://stackoverflow.com/q/803225/1243316. – Brad Rem Apr 07 '12 at 16:39
  • 1
    @TheBoss see the same effect `double d1 = 615.0 + (0.002 + 0.003 + 0.012); double d2 = 615.0 + 0.002 + 0.003 + 0.012;` – L.B Apr 07 '12 at 16:42
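A minimal console sketch of the effect L.B describes, using the literals from the question; the difference only becomes visible with the round-trip ("R") format, since default formatting rounds it away:

```csharp
using System;

public class OrderingDemo
{
    public static void Main()
    {
        // Grouped: the three small values are added together first,
        // then their total is added to 615.0.
        double d1 = 615.0 + (0.002 + 0.003 + 0.012);

        // Sequential: each small value is added to a running total
        // that already holds the large value 615.0.
        double d2 = 615.0 + 0.002 + 0.003 + 0.012;

        // "R" (round-trip) formatting shows the full stored value.
        Console.WriteLine(d1.ToString("R"));
        Console.WriteLine(d2.ToString("R"));
    }
}
```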

2 Answers


Floating point math is inherently inexact, and the order in which you add numbers together affects how much rounding error accumulates. If it's important for these calculations to be exact you should use decimal, not double.

This doesn't have anything to do with using Sum vs. manually summing the data. In the first case you add each number to 615 as you go; in the second you add all of the numbers to each other and then add the total to 615. It's a different ordering of the same additions. Depending on which numbers you use, either method could potentially result in more or less error.
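A sketch of the decimal version of the question's code; decimal stores a base-10 significand, so these particular literals are represented exactly and both orderings agree:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DecimalDemo
{
    public static void Main()
    {
        var values = new Dictionary<string, decimal>
        {
            { "a", 0.002m }, { "b", 0.003m }, { "c", 0.012m }
        };

        // Iterative sum.
        decimal v1 = 615.0m;
        foreach (var kp in values)
        {
            v1 += kp.Value;
        }

        // Sum() over the values.
        decimal v2 = 615.0m + values.Values.Sum();

        // Both totals are exactly 615.017m: decimal arithmetic on
        // these base-10 literals introduces no rounding at all.
        Console.WriteLine(v1 == v2);  // True
    }
}
```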

Servy
  • So is there any way of knowing the order which they must be summed to have the least floating point error? Unfortunately, the decimal type is also inaccurate, not for the above example, but for the program which I am making. It is indeed less inaccurate, but not as accurate as the Sum method. – TheBoss Apr 07 '12 at 17:42
  • 3
    @TheBoss: Is there any way of knowing what order they must be summed in to have the *least* error? **Yes.** If you are performing a lot of calculations and need to be able to estimate error bounds or minimize them then I recommend taking an undergraduate-level course in *numerical methods*, or obtaining the text book for such a course and studying it. As a rough rule of thumb: **add the small things together first**. That does not guarantee minimized error but it is a good first approximation. – Eric Lippert Apr 07 '12 at 18:07
  • Decimal is similarly "inaccurate". Floating point and decimal are each better suited to certain types of task, but neither is more or less accurate than the other. – phoog Apr 09 '12 at 19:13
  • @phoog I was referring to the example numbers given, not in general. Another answerer has already linked to a discussion of the various types, and the OP has said that decimal won't work due to problems in code not posted here, so I didn't bother to discuss it at length. – Servy Apr 09 '12 at 19:25
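Eric Lippert's rule of thumb above (add the small things together first) can be sketched by sorting ascending by magnitude before summing; this is a heuristic under the stated assumption, not a guarantee of minimal error:

```csharp
using System;
using System.Linq;

public class SmallFirstDemo
{
    public static void Main()
    {
        double[] data = { 615.0, 0.002, 0.003, 0.012 };

        // Summing smallest-magnitude first lets the small terms
        // accumulate before they meet the large one, which tends
        // to reduce (but does not eliminate) rounding error.
        double smallFirst = data.OrderBy(x => Math.Abs(x)).Sum();

        // Summing in the given order adds each small term to a
        // total that is already large.
        double inOrder = data.Sum();

        Console.WriteLine(smallFirst.ToString("R"));
        Console.WriteLine(inOrder.ToString("R"));
    }
}
```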

The problem with double and float is that they are stored as binary numbers (e.g. 1000110.10101001 internally), so many decimal values can only be represented approximately.
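A small illustration of that binary-approximation point, using the classic 0.1 + 0.2 case rather than the question's values:

```csharp
using System;

public class BinaryDemo
{
    public static void Main()
    {
        // Neither 0.1 nor 0.2 has an exact binary representation,
        // so their sum is not the double nearest to 0.3.
        double sum = 0.1 + 0.2;

        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004
    }
}
```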

Read Jon Skeet's explanation: Difference between decimal, float and double in .NET?

Chuck Savage
  • Decimals and ints are also binary numbers represented with 1's and 0's internally; the critical difference is that decimals use a base-10 system for indicating fractional values, while floats and doubles use base 2. – phoog Apr 09 '12 at 19:15