
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?

If I loop through numerous random doubles and round them to two decimal places, each individual rounded value appears to be correct (0.02, 0.01, 0.00, etc.).

However, when the rounded values are summed, a very small fractional error appears to be retained along with each round.

var random = new Random();
double total = 0;

for (int i = 0; i < 10000; i++)
{
    total += Math.Round(random.NextDouble() * 0.02, 2);
}

Console.WriteLine(total);

Sample Outputs:

100.600000000006

99.7400000000059

Anyone care to explain why this happens in a more intuitive way?

Sean S

2 Answers


System.Double and System.Single (the float keyword) are base 2 floating point types. Many finite decimal values have an infinite representation in base 2, much as 1/3 has an infinite representation in base 10. Therefore, when you round to such a value, the stored binary result is only an approximation of it. To avoid this problem, use the decimal type, which is a base 10 floating point type.
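Here is a sketch of that suggestion (the variable names and loop bounds just mirror the question; the exact double output varies from run to run):

var random = new Random();

double doubleTotal = 0;
decimal decimalTotal = 0;

for (int i = 0; i < 10000; i++)
{
    double value = random.NextDouble() * 0.02;

    // double: Math.Round returns the nearest base-2 double to the two-digit
    // result, which is usually not exactly 0.00, 0.01, or 0.02
    doubleTotal += Math.Round(value, 2);

    // decimal: 0.00, 0.01, and 0.02 are exactly representable in base 10,
    // so the rounded values and their running sum stay exact
    decimalTotal += Math.Round((decimal)value, 2);
}

Console.WriteLine(doubleTotal);  // e.g. 100.600000000006
Console.WriteLine(decimalTotal); // e.g. 100.60 (exact to two places)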

There must be 100 duplicates of this question on Stack Overflow, but I am on my phone, which makes it inconvenient to find them and link to them.

For more information, see the Wikipedia article on the IEEE 754 double-precision format.

Many will say that doubles are "not exact", which is false. Every double value (except NaN and the infinities, of course) represents an exact number, and that number can also be written exactly in base 10, because 2 is one of the prime factors of 10. The only approximation happens when you try to represent certain decimal fractions (or other rational numbers whose denominator has at least one prime factor other than 2) as a double.

The best way to understand this, for me at least, is to work out, on paper, the binary representations of a few fractions. For example, try 0.5, 0.625, 3.25, 5/16, 1/3, 0.2, and 0.3.
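If you would rather let the machine do that arithmetic, the round-trip "G17" format string prints enough digits to distinguish the double that is actually stored (a small sketch; the printed digits identify the nearest double, not its full exact expansion):

// Fractions whose denominators are powers of 2 are stored exactly
Console.WriteLine(0.5.ToString("G17"));          // 0.5
Console.WriteLine(0.625.ToString("G17"));        // 0.625
Console.WriteLine(3.25.ToString("G17"));         // 3.25
Console.WriteLine((5.0 / 16.0).ToString("G17")); // 0.3125

// Denominators with a prime factor other than 2 can only be approximated
Console.WriteLine((1.0 / 3.0).ToString("G17"));  // 0.33333333333333331
Console.WriteLine(0.2.ToString("G17"));          // 0.20000000000000001
Console.WriteLine(0.3.ToString("G17"));          // 0.29999999999999999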

phoog

Doubles do not store base 10 numbers; they store a value in base 2, so fractional numbers can exhibit small differences from the expected decimal value. For what it's worth, this is not unique to base 2. Base 10 (and really any base-N system) has the same problem; take 1/3, for example. In base 10 you end up representing it as something like 0.3333333(...), but there is no finite base 10 representation that captures 1/3 exactly.

In your example, the representation of the fractional portion of each number can carry a small error, and because you are adding thousands of these values together, those small errors accumulate. Using my example above, if you round 0.333333(...) to two decimal places you get 0.33, which has a fairly substantial inaccuracy relative to the actual value of 1/3. Letting these inaccuracies accumulate is a common mistake when doing floating point math.
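A minimal illustration of that accumulation (nothing from the question here; just ten additions of 0.1, which is not exactly representable in base 2):

double sum = 0;

for (int i = 0; i < 10; i++)
{
    sum += 0.1; // each addend is the nearest double to 0.1, slightly above it
}

Console.WriteLine(sum == 1.0);          // typically False
Console.WriteLine(sum.ToString("G17")); // typically 0.99999999999999989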

As @Phoog writes, there are many explanations of this on SO. Here's one: Why is floating point arithmetic in C# imprecise?

Chris Shain