
I am trying to make a program that subtracts decimal values from a number, but when I input certain values it returns a long series of decimals instead of the correct value.

I am trying to subtract 2.9 from 3, and instead of getting 0.1 I am getting a value like 0.099999999999. I have tried playing with the values of both the starting and subtracting numbers, but every time there is some value whose subtraction gives me a result like this and breaks the code. Is there a way to stop this from happening?
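For example, a stripped-down sketch of the kind of subtraction I am doing:

```python
# Plain float subtraction -- this is all the program does with the numbers
result = 3 - 2.9
print(result)  # prints a long string of digits close to 0.1, not exactly 0.1
```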

ws23
    If this isn't the most repeatedly asked question on SO, it's got to be close. – Woodford Feb 17 '23 at 17:17
  • If you start with strings and cast directly to Decimal you get your result, but yes in general floating point math often "seems" broken. `print(decimal.Decimal("3.0") - decimal.Decimal("2.9"))` – JonSG Feb 17 '23 at 17:24
  • Are you always dealing with single digit precision after the decimal? applying `round(x,1)` might help you get a result that looks more like you want. – JonSG Feb 17 '23 at 17:30
  • Do you know [what every computer scientist should know about floating-point arithmetic](https://dl.acm.org/doi/10.1145/103162.103163)? – Friedrich Feb 17 '23 at 17:32

1 Answer


Computer hardware has limitations when it comes to floating-point arithmetic. Check the Python docs on floating-point arithmetic for more information; the opening is as follows:

> Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.
>
> Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.
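To make this concrete, the sketch below (using the `decimal` module and `round`, as the comments suggest) shows the approximation that is actually stored and two common ways to get the result you expect:

```python
from decimal import Decimal
import math

# The float literal 0.1 is stored as the nearest binary fraction, not exactly 0.1.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(f"{3 - 2.9:.20f}")   # close to, but not exactly, 0.10000000000000000000

# Option 1: exact decimal arithmetic -- build Decimals from strings, not from floats.
print(Decimal("3.0") - Decimal("2.9"))   # 0.1

# Option 2: keep using floats, but round for display and compare with a tolerance.
print(round(3 - 2.9, 1))                 # 0.1
print(math.isclose(3 - 2.9, 0.1))        # True
```

Whether you reach for `Decimal` or for `round`/`math.isclose` depends on whether you need exact decimal arithmetic throughout the program or just a cleanly displayed and compared result at the end.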