
I'm maintaining a piece of legacy code that uses float variables to manage amounts of money, and this causes some approximation issues.

I know that this is not the correct way to represent money and that the BigDecimal type should be used instead, but refactoring all the legacy code would take a lot of time. In the meantime, I would like to understand: what is the worst error introduced by this approximation?
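For illustration, here is a minimal sketch (the amounts and the loop are made up, not taken from the real code) of the kind of drift I mean:

    import java.math.BigDecimal;

    public class FloatMoneyDrift {
        public static void main(String[] args) {
            // 0.01 has no exact binary representation; the BigDecimal(double)
            // constructor prints the value the float actually stores, which
            // is slightly below 0.01.
            System.out.println(new BigDecimal(0.01f));

            float total = 0.0f;
            // Add one cent 100,000 times; the exact result would be 1000.00.
            for (int i = 0; i < 100_000; i++) {
                total += 0.01f;
            }
            // Every addition rounds again, so the printed total drifts
            // away from the exact 1000.00.
            System.out.println(total);
        }
    }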

I would also appreciate a link to some theoretical document that explains the problem (and how to estimate the worst-case error) in a detailed but understandable manner.

Any help would be appreciated.

  • 2
    Hard to say. The errors compound, so it very much depends upon the calculation you do and how many times you do it. – Andy Turner Oct 21 '15 at 08:20
  • 1
  • The extent of the error is probably proportional to (i) the number of operations on each float (the more arithmetic you do, the larger the potential error) and (ii) the amounts (the larger the amounts, the larger the errors) – assylias Oct 21 '15 at 08:20
  • As an example of (ii) above, 1,000,000,095 (1 billion and change) as a float is 1,000,000,064, or an error of more than 30 units of currency (say dollars); the snippet after these comments reproduces this. – assylias Oct 21 '15 at 08:28
  • The *least* approximation error is one cent, which is already too much. – user207421 Oct 21 '15 at 08:41
  • Thanks for the answers, but what I would like to know is something more theoretical. For example, I know that the Java float has a 7-digit mantissa. Can I assume that ALL values that require 7 or fewer digits are represented exactly? Furthermore, can I assume that the error is always smaller than one unit of the 7th most significant digit? – user2572526 Oct 21 '15 at 08:48
  • No, even with a few digits (like 3) the representation won't be accurate. Play a bit with http://www.exploringbinary.com/floating-point-converter/ to see it for yourself; the snippet below shows the same effect. – user158037 Oct 21 '15 at 10:01
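To make the comments above concrete, here is a small snippet (assuming Java, as in the question; it uses only Math.ulp and the BigDecimal(double) constructor from the standard library). For a single correctly rounded operation the error is at most half an ulp of the stored value, which is the kind of bound the last comment-question asks about; repeated arithmetic compounds it, as noted in the first comment:

    import java.math.BigDecimal;

    public class FloatErrorDemo {
        public static void main(String[] args) {
            // (ii) Large magnitudes: adjacent floats near one billion are
            // 64 apart, so 1,000,000,095 rounds to 1,000,000,064.
            System.out.println(1_000_000_095f);           // 1.00000006E9
            System.out.println(Math.ulp(1_000_000_000f)); // 64.0

            // Few decimal digits are no guarantee of exactness: 1.03 has
            // only three significant digits, yet no exact binary form.
            // This prints the exact stored value, which is not 1.03.
            System.out.println(new BigDecimal(1.03f));

            // Worst-case bound for a single rounding: half an ulp of the
            // stored value. Repeated operations can compound this.
            System.out.println(Math.ulp(1_000_000_095f) / 2); // 32.0
        }
    }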

0 Answers