
I know that computing with decimals is not an easy task for a computer, but is there any way I can get simple arithmetic done in Python? What is the best approach?

>>> 0.3 + 0.3 + 0.3 + 0.1 != 1
True
>>> from decimal import Decimal
>>> Decimal(0.3) + Decimal(0.3) + Decimal(0.3) + Decimal(0.1) != 1
True
>>> Decimal(0.3) + Decimal(0.3) + Decimal(0.3) + Decimal(0.1) != Decimal(1)
True
>>> Decimal(0.3) + Decimal(0.3) + Decimal(0.3) + Decimal(0.1)
Decimal('0.9999999999999999722444243843')

Update:

As proposed, the solution is to use `Decimal()` with strings instead of plain numbers. But I find this solution very unsatisfying (non-Pythonic and ugly). Is there really no other way? (Using a decorator, perhaps?)

alec_djinn
  • @Ev.Kounis: That is incorrect. As the answer here and in the other referenced question shows, you can use decimal arithmetic, and the correction to the OP’s code is to use strings to initialize decimal values, not binary floating-point constants. – Eric Postpischil Dec 14 '17 at 13:22
  • @Jean-FrançoisFabre: Yes, we know binary floating-point is not accurate for decimal arithmetic. How is that helpful for this question? The OP already knows there is an inaccuracy, so you have not provided any new information. – Eric Postpischil Dec 14 '17 at 13:23
  • Regarding your update: (a) You can assign your constants to named variables so that you can use those nicer names in your code instead of repeating verbose constructions like `Decimal('0.3')`. (b) It is possible to use binary floating-point to get some decimal arithmetic done correctly, but it requires expert knowledge of floating-point. It is probably not the solution you want. – Eric Postpischil Dec 14 '17 at 13:26
  • @vaultah All the other questions have the same solution; I am looking for something different. I think the community has not come up with a satisfying answer yet. – alec_djinn Dec 14 '17 at 13:28
  • @EricPostpischil So assuming you wanted to check a floating-point value against another one, you would import the decimal module and convert to str instead of just supplying a tolerance or checking it some other, non-floating-point-dependent way? – Ma0 Dec 14 '17 at 13:40
  • @Ev.Kounis: The OP does not ask about comparing one floating-point value to another. – Eric Postpischil Dec 14 '17 at 13:43
  • @EricPostpischil Of course he is. – Ma0 Dec 14 '17 at 13:47
  • @Ev.Kounis: The question asks about getting mathematically correct values for decimal arithmetic. The only comparison used is for the purpose of illustration of the exact value arrived at by arithmetic; there is no indication that comparing floating-point values is actually a component of their application. In any case, your comment “the best approach is not to depend on it” is incorrect (although it is unclear what the antecedent of your “it” is—floating-point? The Decimal package?). Floating-point arithmetic is dependable if you understand it, and it is useful. – Eric Postpischil Dec 14 '17 at 13:58
  • @Ev.Kounis Eric is right. I am asking how to get correct results from decimal arithmetic AND how to do it in a better way than converting everything to strings and using ugly `Decimal()` calls. I really hope there is a better way. So far, no luck. – alec_djinn Dec 14 '17 at 14:24
  • If `Decimal` were not the Pythonic way to exactly represent decimals in Python, then why would `Decimal` be part of Python? – Sneftel Dec 14 '17 at 15:04
  • @Sneftel `Decimal` is a function, they should have defined a proper decimal type like in Viper for example. I think they could have done something like `from future import decimal_type` and that all the floats in the script would have been of decimal type by default (same as they did with `division`). So, if I have 100 formulas in my code they would have just worked as expected without pushing me to add `Decimal()` and `""` all over the place. Unfortunately, the latter seems to be the only solution so far, and it is not pythonic at all. – alec_djinn Dec 14 '17 at 15:16
  • Decimal *is* a type, not a function (try `type(Decimal("0.3"))`). As for replacing all floating point constants with decimal constants, that sounds like a *really* bad idea from a performance standpoint, but feel free to suggest it! See [PEP 1](https://www.python.org/dev/peps/pep-0001/) for how. – Sneftel Dec 14 '17 at 15:30
  • @alec_djinn: "they should have defined a proper decimal type like in Viper for example" <- I'm curious. I'm looking at the viper source on GitHub, and I don't see anywhere that the arithmetic operations (e.g., multiplication, division) for the decimal type are defined. Can you point me to the right place? – Mark Dickinson Dec 14 '17 at 18:09
  • @Mark Dickinson in types.py combine_units() I believe. – alec_djinn Dec 14 '17 at 18:53
  • Also, in parser/parser_utils.py get_number_as_fraction() – alec_djinn Dec 14 '17 at 19:16
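
To illustrate Eric Postpischil's point (a) above, here is a minimal sketch: the `Decimal('...')` constructions can be confined to a few named constants (the names below are just placeholders), so the formulas themselves stay free of quotes.

>>> from decimal import Decimal
>>> TENTH = Decimal('0.1')          # named constants; pick names that fit your domain
>>> THREE_TENTHS = Decimal('0.3')
>>> THREE_TENTHS + THREE_TENTHS + THREE_TENTHS + TENTH
Decimal('1.0')
>>> THREE_TENTHS + THREE_TENTHS + THREE_TENTHS + TENTH == 1
True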

1 Answer


When you write `Decimal(0.3)`, you're constructing the decimal from the binary floating-point value 0.3, which is already subject to rounding error before `Decimal` ever sees it.

>>> Decimal(0.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')

Use strings to avoid floating point entirely.

>>> Decimal('0.3') + Decimal('0.3') + Decimal('0.3') + Decimal('0.1')
Decimal('1.0')
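
With string inputs, the comparison from the question also behaves as expected:

>>> Decimal('0.3') + Decimal('0.3') + Decimal('0.3') + Decimal('0.1') != 1
False
>>> Decimal('0.3') + Decimal('0.3') + Decimal('0.3') + Decimal('0.1') == Decimal(1)
True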
John Kugelman
  • Why would `Decimal(1)` be different from `Decimal('1.0')`? Isn't that confusing? – alec_djinn Dec 14 '17 at 13:14
  • Isn't there any other way to accomplish that without converting all numbers into strings? This solution is definitely not Pythonic and unbelievably ugly. – alec_djinn Dec 14 '17 at 13:20
  • @alec_djinn: `Decimal(1)` will have the same value as `Decimal('1.0')`, because binary floating-point does represent 1 exactly. `Decimal(0.3)` will not have the same value as `Decimal('0.3')` because binary floating-point does not represent .3 exactly. So you could omit the apostrophes for `Decimal(1)`. But then you need to be clear in your code about when you can and cannot omit apostrophes; it is easy to forget and make a mistake. That is especially so for another engineer who changes your code later and does not know why you used apostrophes in some places and not others. – Eric Postpischil Dec 14 '17 at 13:28
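
The difference described in the comment above is easy to verify in the REPL (a small illustrative check using only the standard decimal module):

>>> from decimal import Decimal
>>> Decimal(1) == Decimal('1.0')     # the integer 1 (and the float 1.0) convert exactly
True
>>> Decimal(0.3) == Decimal('0.3')   # the float 0.3 does not: it is already rounded
False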