

I have a float holding a very important value, which has to be VERY exact.
The problem is that I only ever change the value with + and - (no division involved).
After changing the value about 20 times, it isn't 0.05 as expected; it's 0.0499989.
Why is this? I'm not casting the float, I'm not dividing it, yet it magically changes from 0.05 to 0.0499989.
I tried Math.Round(value, 2); but it doesn't return 0.05 either. What should I do now?

alex
  • What do you mean "by design"? How can a value change by design? – alex Dec 27 '10 at 23:27
  • possible duplicate of [Why is floating point arithmetic in C# imprecise?](http://stackoverflow.com/questions/753948/why-is-floating-point-arithmetic-in-c-imprecise) – Michael Dec 27 '10 at 23:29
  • It doesn't magically change; it was never 0.05. See the reason for this below. – Rune FS Dec 27 '10 at 23:35
  • "I have a float holding a very important value, which has to be VERY exact." Floating-point representation has a wide range but limited precision. If you need VERY exact values, you either need a different data type, or you need to be VERY careful with the operations you perform. And it's not necessarily a simple problem to solve: I recall my Numerical Analysis (http://en.wikipedia.org/wiki/Numerical_analysis) classes, which address these issues, as being among the tougher Computer Science classes I took. – Michael Burr Dec 27 '10 at 23:41

4 Answers

10

Floating-point variables in many (if not most) languages hold only an imprecise approximation of the actual value. In C# you can solve the issue by using the decimal data type. See this SO question.
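A minimal sketch of the difference (the variable names and the 0.1 step are illustrative, not from the question): accumulating a base-10 fraction drifts with a binary floating-point type but stays exact with decimal.

```csharp
using System;

class FloatVsDecimal
{
    static void Main()
    {
        // Accumulate 0.1 ten times in a binary floating-point type.
        // 0.1 has no finite binary representation, so error builds up.
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.1;
        Console.WriteLine(d == 1.0);   // False

        // The same accumulation in decimal is exact, because 0.1
        // has a finite base-10 representation.
        decimal m = 0.0m;
        for (int i = 0; i < 10; i++) m += 0.1m;
        Console.WriteLine(m == 1.0m);  // True
    }
}
```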

Jeremy Fuller
  • Thank you for the quick answer. It really worked with `decimal`. – alex Dec 28 '10 at 00:25
  • Floats and decimals *both* have representation error, and decimals actually have *larger* average representation error on a per-bit-of-precision basis. The difference between them is that the quantities which have *zero* representation error in a decimal are quantities that you are likely to want to represent when doing financial calculations. – Eric Lippert Dec 28 '10 at 01:18
4

Float and double values are stored in binary (base 2).
Therefore, they cannot exactly represent numbers like 0.3 that have no finite-length representation in binary.

Similarly, a decimal, which is stored in base 10, cannot accurately represent numbers like 1/3 that have no finite-length representation in decimal.

You need an arbitrary-precision arithmetic library.
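A quick sketch of both limitations (the specific values are illustrative, not from the answer):

```csharp
using System;

class BaseLimitations
{
    static void Main()
    {
        // 0.1, 0.2, and 0.3 have no finite binary representation,
        // so double only approximates them and the sum misses 0.3.
        double x = 0.1 + 0.2;
        Console.WriteLine(x == 0.3);           // False

        // 1/3 has no finite base-10 representation, so decimal
        // only approximates it; multiplying back misses 1.
        decimal third = 1m / 3m;
        Console.WriteLine(third * 3m == 1m);   // False
    }
}
```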

SLaks
  • Arbitrary precision math libraries don't solve this problem, since they use (arbitrary) *finite* precision, and `0.05` cannot be represented in any finite number of binary digits. What is actually needed is either a symbolic representation, or a rational representation, or for the questioner to educate himself about floating-point and adjust his algorithm appropriately. – Stephen Canon Dec 27 '10 at 23:37
2

The problem is that with float some fractional numbers cannot be exactly represented. Consider using the decimal data type: if you only use + and -, you shouldn't have that problem, since decimal uses base 10 internally.
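A minimal sketch of that point, mirroring the question's scenario (the 0.01 step and loop counts are illustrative): a decimal value repeatedly adjusted with + and - comes back exactly.

```csharp
using System;

class DecimalAddSubtract
{
    static void Main()
    {
        decimal value = 0.05m;

        // Twenty additions followed by twenty subtractions of the same step.
        // Decimal addition/subtraction of base-10 fractions at this scale
        // is exact, so the original value is recovered precisely.
        for (int i = 0; i < 20; i++) value += 0.01m;
        for (int i = 0; i < 20; i++) value -= 0.01m;

        Console.WriteLine(value == 0.05m);   // True
    }
}
```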

BrokenGlass
2

Just like an int variable can only hold integers in a certain range, a float can only hold certain values. 0.05 is not one of them.

If you set an int variable to (say) 3.4, it won't actually hold the value 3.4; it will hold that value converted to a representable int value: 3.

Similarly, if you set a float variable to 0.05, it won't get that exact value; it will instead get that value converted to the closest value representable as a float. This is what you are seeing.
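A small sketch of that conversion (the printed digits are what the float actually stores, shown by widening to double; the exact digit string may vary by runtime):

```csharp
using System;

class NearestFloat
{
    static void Main()
    {
        float f = 0.05f;

        // Widening to double exposes the value the float actually holds:
        // the nearest representable float to 0.05, not 0.05 itself.
        Console.WriteLine((double)f != 0.05);            // True
        Console.WriteLine(((double)f).ToString("G17"));  // roughly 0.0500000007...
    }
}
```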

Stephen Canon