
I am developing a program which does some floating-point calculations, and I stumbled upon an interesting rounding issue in .NET, whereby the expression:

0.1 + 0.2 == 0.3

evaluates to false, because:

0.1 + 0.2

evaluates to 0.30000000000000004, and not 0.3. That pretty severely affects unit testing.
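Here is a minimal console program that reproduces it:

```csharp
using System;

class Repro
{
    static void Main()
    {
        double sum = 0.1 + 0.2;

        Console.WriteLine(sum == 0.3); // False
        Console.WriteLine(sum - 0.3);  // a tiny positive error, about 5.5E-17
    }
}
```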

I do understand why that happens, however what I'm interested to know is: what best practices should I be following when dealing with double arithmetic in order to avoid such problems where possible?

EDIT: using the decimal type does not help

SUMMARY: I appreciate all the comments. Unfortunately, some of you assumed that this question is about how to make 0.1 + 0.2 equal 0.3, and that is not what I asked. I accept that floating-point arithmetic can return values with variation. I was asking what common strategy is best practice to follow so that this variation does not cause issues. I think this question is ready to be closed.

galets
  • If you expect `0.1 + 0.2 == 0.3` to evaluate to `true` use `decimal`. – Tim Schmelter Feb 13 '14 at 22:03
  • Don't test for exact equality. Test to see if the result is within an acceptable error range, e.g. `Assert(Math.Abs((0.1 + 0.2) - (0.3)) < 1e-10)` – p.s.w.g Feb 13 '14 at 22:04
  • Also note if using `decimal` to use the correct suffix on literals: `0.1M + 0.2M == 0.3M` – ZoolWay Feb 13 '14 at 22:04
  • The decimal type should allow for exact arithmetic and comparison in your example. – President James K. Polk Feb 13 '14 at 22:07
  • @GregS You miss the point. The calculation under test is performed using binary floating point. That's by design. – David Heffernan Feb 13 '14 at 22:08
  • @DavidHeffernan If the literals are decimals it works: `((0.1M+0.2M)==0.3M)` evaluates true – ZoolWay Feb 13 '14 at 22:12
  • @ZoolWay Nobody (apart from MS) unit tests the built in addition operator for decimal operands. Are you saying you would write this exact code: `Assert.AreEqual(0.1m + 0.2m, 0.3m)`? Of course not. We are talking about code under test that performs binary floating point calculations. – David Heffernan Feb 13 '14 at 22:15
  • @galets FWIW, this is not an issue of rounding, rather one of representability. None of the values you present, `0.1`, `0.2` and `0.3` can be exactly represented. So you are not adding `0.1` to `0.2`. You are adding the closest representable value to `0.1` to the closest representable value to `0.2`. And then rounding to the closest representable value. Which may not be the closest representable value to `0.3`. It's all about representability. – David Heffernan Feb 13 '14 at 22:18
  • @ZoolWay decimals might work for the exact code in question, but that doesn't solve OP's general problem. `(1m/3m) * 3m == 1m` → `false` – p.s.w.g Feb 13 '14 at 22:23
  • @p.s.w.g Creating a non-terminating decimal expansion by division is a very nice example against it, nice :) That can only be solved with tolerance. – ZoolWay Feb 13 '14 at 22:25
  • @zool the equality would hold if we used ternary floating point. Remember that .net decimal is just base 10 floating point. – David Heffernan Feb 13 '14 at 22:27
  • @DavidHeffernan: I have more trouble reading the minds of posters than you do. It was worth pointing out that decimal arithmetic can be done exactly. This would be appropriate for sums of money for example. – President James K. Polk Feb 14 '14 at 00:12
  • @GregS binary arithmetic can be done exactly, provided the values are representable. Decimal arithmetic is only exact for representable values. It's still inexact floating point, just to base 10 rather than base 2. – David Heffernan Feb 14 '14 at 05:21

3 Answers


You typically test for equality up to a certain tolerance.

So, in NUnit, for instance, you might write:

Assert.AreEqual(x, y, tol);

where tol is a suitably chosen tolerance value. Other unit testing frameworks will have similar assert functionality for floating point values.

Of course, how you choose the tolerance is, potentially, an enormous topic. Briefly, in order to know what tolerance to use, you need to know something about the calculation under test. Analysis of that calculation would be performed to decide on a suitable tolerance.
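Outside a test framework, the same idea is a small helper (`NearlyEqual` and the `1e-9` tolerance below are just illustrative names and values, not recommendations):

```csharp
using System;

static class FloatingAssert
{
    // Absolute-tolerance comparison. Whether an absolute or a
    // relative tolerance is appropriate depends on the magnitudes
    // the calculation under test produces.
    public static bool NearlyEqual(double expected, double actual, double tol)
        => Math.Abs(expected - actual) <= tol;

    static void Main()
    {
        Console.WriteLine(0.1 + 0.2 == 0.3);                  // False
        Console.WriteLine(NearlyEqual(0.3, 0.1 + 0.2, 1e-9)); // True
    }
}
```

NUnit's constraint syntax, `Assert.That(actual, Is.EqualTo(expected).Within(tol))`, expresses the same check, and `Within(...).Percent` gives a relative tolerance instead.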

David Heffernan

Equality should be tested with some kind of epsilon when using floating-point types.

Compare to this answer: Floating point comparison functions for C#


Decimal is an option, but it must be initialized correctly, with the `M` suffix on the literals: `decimal a = 0.1M`

`((0.1M+0.2M)==0.3M)` evaluates to `true`!
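To illustrate both this point and the `1m/3m` caveat raised in the comments (a minimal sketch):

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // With the M suffix the literals are decimals, and 0.1, 0.2
        // and 0.3 are exactly representable in base 10.
        Console.WriteLine((0.1M + 0.2M) == 0.3M); // True

        // decimal is still floating point (base 10), so non-terminating
        // expansions are rounded and the round trip is not exact.
        Console.WriteLine((1M / 3M) * 3M == 1M);  // False
    }
}
```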

ZoolWay
  • Decimal is not an option when the code under test uses binary floating point. – David Heffernan Feb 13 '14 at 22:16
  • Yes, it is true that decimal is not an option when the values you have got already are floating points. But that is not in the question, just best practice for how to handle floating comparison. Best practice is an epsilon/tolerance comparison - or when possible use a not-so-floating type ;) – ZoolWay Feb 13 '14 at 22:22
  • Yes that really is the question. The question is about comparing binary floating point values. – David Heffernan Feb 13 '14 at 22:25
  • Reading the title - point taken! – ZoolWay Feb 13 '14 at 22:27

Two things you need to read:

And don't forget this treasure trove of ruefully acquired floating point wisdom: http://randomascii.wordpress.com/category/floating-point/

[GOLDBERG91]
Goldberg, David, 1991. "What Every Computer Scientist Should Know About Floating-Point Arithmetic." ACM Computing Surveys, vol. 23, no. 1, March 1991, pp. 5-48.

Nicholas Carey