I noticed that math operations in Python are not as precise as I expected, especially ones involving float numbers. I know this is due to the nature of binary floating-point representation, and that the problem can be worked around by doing:
from decimal import Decimal
a = Decimal('0.1') + Decimal('0.2')
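For context, the discrepancy between the two looks like this (a minimal demonstration):

```python
from decimal import Decimal

# Plain float arithmetic carries binary-representation error:
print(0.1 + 0.2)          # -> 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # -> False

# Decimal keeps the result exact in base 10:
print(Decimal('0.1') + Decimal('0.2'))                     # -> 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # -> True
```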
I can even do something further like:
from decimal import Decimal

def func(a, b, operator):
    a_ = Decimal(str(a))
    b_ = Decimal(str(b))
    return eval('float(a_ {} b_)'.format(operator))

func(0.1, 0.2, '+')  # returns 0.3
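The same idea can be written without eval by dispatching through the standard operator module (just a sketch; dec_op is a made-up name):

```python
import operator
from decimal import Decimal

# Map operator symbols to the corresponding stdlib functions
_OPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
    '**': operator.pow,
    '==': operator.eq,
}

def dec_op(a, b, op):
    """Apply op to a and b after converting both to Decimal via str()."""
    result = _OPS[op](Decimal(str(a)), Decimal(str(b)))
    # Comparisons return a bool; arithmetic returns a Decimal
    return result if isinstance(result, bool) else float(result)

dec_op(0.1, 0.2, '+')    # -> 0.3
dec_op(2, 10, '**')      # -> 1024.0
dec_op(0.1, 0.1, '==')   # -> True
```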
However, I do not want to go that far. In fact, I use Python as a calculator or a MATLAB alternative all the time, and having to write a lot of extra code for a quick calculation is not convenient. Even after setting the context for the decimal module, I still have to write "Decimal" in front of every number.
This 5-year-old question focuses on scripts rather than on working inside an interpreter. I also tried its code example, but it did not work as expected.
Is there a quick and dirty way to make Python evaluate 0.1 + 0.2 with the same result as float(Decimal('0.1') + Decimal('0.2'))? It should also apply to other math operations like ** and to equality comparisons like ==.
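To make the goal concrete, here is the kind of quick-and-dirty behavior I have in mind: a thin float subclass that routes arithmetic through Decimal, so tagging one operand is enough. This is only a sketch under my own assumptions (the class name D is made up, and only a few operators are shown), not a working answer:

```python
from decimal import Decimal

class D(float):
    """A float subclass whose arithmetic goes through Decimal.

    Only +, ** and == are sketched here; a real version would need
    the full set of numeric special methods and their reflected forms.
    """
    def _d(self):
        # str() round-trips the float to its shortest decimal form
        return Decimal(str(float(self)))

    def __add__(self, other):
        return D(float(self._d() + Decimal(str(other))))

    __radd__ = __add__

    def __pow__(self, other):
        return D(float(self._d() ** Decimal(str(other))))

    def __eq__(self, other):
        return self._d() == Decimal(str(other))

    def __hash__(self):
        # Defining __eq__ would otherwise disable hashing
        return float.__hash__(self)

D(0.1) + 0.2          # -> 0.3
D(0.1) + 0.2 == 0.3   # -> True
```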