I am developing a program that does some floating-point calculations, and I stumbled upon an interesting rounding issue in .NET: the expression `0.1 + 0.2 == 0.3` evaluates to false, because `0.1 + 0.2` evaluates to `0.30000000000000004`, not `0.3`. That pretty severely affects unit testing.
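For reference, here is a minimal console program reproducing the behavior (the `"R"` round-trip format is just one way to surface the extra digits; recent .NET versions print the round-trip value by default):

```csharp
using System;

class FloatingPointDemo
{
    static void Main()
    {
        double sum = 0.1 + 0.2;

        // Prints "False": the doubles nearest to 0.1 and 0.2
        // do not add up to the double nearest to 0.3.
        Console.WriteLine(sum == 0.3);

        // Round-trip formatting shows the stored value:
        // 0.30000000000000004
        Console.WriteLine(sum.ToString("R"));
    }
}
```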
I do understand why that happens; however, what I'm interested in knowing is: what best practices should I follow when dealing with double arithmetic in order to avoid such problems where possible?
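For example, the approach I have most often seen suggested is tolerance-based comparison instead of exact equality (test frameworks such as NUnit and MSTest expose the same idea through an `Assert.AreEqual(expected, actual, delta)` overload for doubles). A minimal sketch, with the tolerance value chosen purely for illustration:

```csharp
using System;

static class DoubleComparison
{
    // Tolerance chosen purely for illustration; an appropriate value
    // depends on the magnitude and accumulated error of the real computation.
    const double Epsilon = 1e-9;

    static bool NearlyEqual(double a, double b, double epsilon = Epsilon)
    {
        // Absolute-difference check; a relative comparison is often
        // preferable when values span many orders of magnitude.
        return Math.Abs(a - b) < epsilon;
    }

    static void Main()
    {
        Console.WriteLine(0.1 + 0.2 == 0.3);            // False
        Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3)); // True
    }
}
```

Is that the right general strategy, or is there something better than sprinkling a fixed epsilon everywhere?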
EDIT: using the `decimal` type does not help.
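To be precise: `decimal` does make the literal `0.1m + 0.2m == 0.3m` comparison true, since it stores base-10 digits, but it does not remove rounding in general; any result that is non-terminating in base 10, such as a division by three, still rounds:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // decimal works in base 10, so this particular case holds:
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True

        // ...but results that don't terminate in base 10 still round:
        decimal third = 1m / 3m;
        Console.WriteLine(third * 3m == 1m);     // False
        Console.WriteLine(third * 3m);           // 0.9999999999999999999999999999
    }
}
```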
SUMMARY: Thanks to everyone for commenting. Unfortunately, some of you assumed that this question asks how to make `0.1 + 0.2` equal `0.3`, and that is not what I asked. I accept that floating-point arithmetic can return values with some variation. I was asking what common strategies are best practice to follow so that this variation does not cause issues. I think this question is ready to be closed.