
I noticed that math operations in Python are not as precise as before, especially the ones involving float numbers. I know this is due to the nature of binary floating-point representation, and we can work around the problem by doing:

from decimal import Decimal
a = Decimal('0.1') + Decimal('0.2')
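
For context, the difference is visible directly in the interpreter (the long value below is the exact decimal expansion of the binary double, as printed by CPython):

0.1 + 0.2                          # 0.30000000000000004
Decimal(0.1)                       # Decimal('0.1000000000000000055511151231257827021181583404541015625')
Decimal('0.1') + Decimal('0.2')    # Decimal('0.3')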

I can even do something further like:

def func(a, b, operator):
    # Route each float through its string form so Decimal sees the short literal
    a_ = Decimal('{}'.format(a))
    b_ = Decimal('{}'.format(b))
    # Build e.g. 'float(a_ + b_)' and evaluate it in this scope
    return eval('float(a_ {} b_)'.format(operator))

func(0.1, 0.2, '+') # will return 0.3

However, I do not want to go this far. In fact, I have been using Python as a calculator or a Matlab alternative all the time, and having to write a lot more for a quick calculation is not convenient. Setting the context for the decimal module also still requires writing "Decimal" in front of every number.
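
To illustrate that last point, a minimal example (prec=28 is just the module default):

from decimal import Decimal, getcontext

getcontext().prec = 28             # the context only governs Decimal arithmetic...
0.1 + 0.2                          # ...plain float literals are untouched: 0.30000000000000004
Decimal('0.1') + Decimal('0.2')    # only explicit Decimals benefit: Decimal('0.3')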

This 5-year-old question focused on scripts rather than working inside an interpreter. I also tried its code example, but it did not work as expected.

Is there a quick and dirty way to make Python evaluate 0.1 + 0.2 with the same result as float(Decimal('0.1') + Decimal('0.2'))? It should also apply to the other math operations like ** and to equality comparisons like ==.

ONLYA
    What do you mean by "not as precise _as before_"? Before when? The fundamentals of Python's floating-point arithmetic haven't changed in decades. – Mark Dickinson Mar 03 '23 at 18:31
    Matlab and Python produce the same value for `0.1+0.2`. Both produce exactly `0.3000000000000000444089209850062616169452667236328125`. Matlab hides the additional decimals but `fprintf('%.52f',0.1+0.2)` will show the full value. This is a limitation of fixed floating point precision used by basically all modern languages. What you're seeing is a difference of printed precision, not a difference of actual values in memory. – jodag Mar 03 '23 at 18:33
  • @MarkDickinson I believe it might be as jodag said: the previous version might have hidden the real result by eliminating additional decimals. I forgot the version, but it started returning the full decimals after a specific version. Perhaps something is wrong in my memory. – ONLYA Mar 03 '23 at 18:39
  • @MarkDickinson I believe this question is not a duplicate, because in my opinion the answer provided there only works for a script. The float was never fed to the Decimal constructor as a string literal. – ONLYA Mar 03 '23 at 18:44
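
Mirroring jodag's Matlab example above, the full stored value can be printed from Python as well:

print(f"{0.1 + 0.2:.52f}")
# 0.3000000000000000444089209850062616169452667236328125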

1 Answer


One approach is to create a new class called D (for decimal).

from __future__ import annotations  # allows D to be used in annotations inside class D

from decimal import Decimal

class D:
    def __init__(self, d: float | int):
        # str(d) yields the short repr, so Decimal sees '0.1' rather than the binary float value
        self.d = Decimal(str(d))

    def __add__(self, other_d: D):
        return self.d + other_d.d

By overriding the __add__ method on D, you can recreate the addition behaviour you want. In the same way, you can override the magic method corresponding to each of the other arithmetic operators (e.g. override __eq__ for ==, etc.).
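
For instance, a sketch covering the ** and == operators asked about in the question (the method set here is illustrative, not exhaustive):

from decimal import Decimal

class D:
    def __init__(self, d):
        self.d = Decimal(str(d))

    def __add__(self, other):
        return self.d + other.d

    def __pow__(self, other):    # the ** operator
        return self.d ** other.d

    def __eq__(self, other):     # the == operator
        return self.d == other.d

print(D(0.1) + D(0.2))   # 0.3
print(D(0.1) ** D(2))    # 0.01
print(D(0.1) == D(0.1))  # True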

Once this is in place you can do something like:

a = D(0.1)
b = D(0.3)
c = a + b

print(c)
# -> 0.4 (Decimal type)
Alan
  • Note, the type annotations on the arguments (e.g. `d: float | int`, and `other_d: D`) are Python-version dependent. If they give errors, just remove them. – Alan Mar 03 '23 at 18:42
  • This doesn't seem any better than just using `D=Decimal` – jodag Mar 03 '23 at 18:46
  • Using the internal (magic) methods within a new class is a good approach. However, is there a way to change how Python interprets float literals so that they are passed to the Decimal constructor as strings? As a result, after doing a series of setup steps, I could just type `0.1+0.2` and get 0.3. – ONLYA Mar 03 '23 at 18:50
  • @ONLYA it's an interesting question, and a related one is answered here: https://stackoverflow.com/a/7880276/12705481. The gist comes down to abstract syntax trees; these are the data structures that store the written Python code as a sequence of instructions for the compiler. You can interact with them via the `ast` standard library module. There may be a way to do what you want, though if there is, it's very fundamental and tricky stuff (and certainly beyond my comprehension!). My solution, at the very least, provides a shorthand. – Alan Mar 03 '23 at 19:55
  • @jodag the advantage is the string conversion. If you do `Decimal(0.1)` you get the float's rounding error baked in, so you have to write `Decimal("0.1")` to avoid it. This solution gets us closer to using "raw" floats. – Alan Mar 03 '23 at 19:58
  • @Alan Thanks. I did a little research on this, but it appears to transform the AST of a script file rather than interpreter input. I can run it in the interpreter like `exec(compile(transformed, '', "exec"))`, but I would have to call that on the transformed version every time. Perhaps this is only viable for a script file, not direct interpreter input. – ONLYA Mar 04 '23 at 00:02
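
Following up on the AST route from the last two comments, here is a minimal sketch of such a transformer (the class name and structure are illustrative, not taken from the linked answer; assumes Python 3.8+, where literals parse to ast.Constant):

import ast
from decimal import Decimal

class FloatToDecimal(ast.NodeTransformer):
    """Rewrite every float literal into a Decimal(<string literal>) call."""
    def visit_Constant(self, node):
        if isinstance(node.value, float):
            # Turn 0.1 into Decimal('0.1'), using the literal's short repr
            return ast.Call(
                func=ast.Name(id='Decimal', ctx=ast.Load()),
                args=[ast.Constant(value=repr(node.value))],
                keywords=[],
            )
        return node

source = "print(0.1 + 0.2)"
tree = ast.fix_missing_locations(FloatToDecimal().visit(ast.parse(source)))
exec(compile(tree, '<ast>', 'exec'))   # prints 0.3

As the comments note, each snippet still has to be parsed, transformed, and compiled explicitly, so this is more practical for scripts than for interactive typing.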