Everybody knows, or at least every programmer should know, that using the float
type can lead to precision errors. However, there are cases where an exact result matters, and comparing with an epsilon value is not always enough. Anyway, that's not really the point.
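As a quick reminder of the float behaviour I mean, here is the textbook example (math.isclose is just the standard-library way of doing the epsilon comparison):
>>> 0.1 + 0.2 == 0.3
False
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)                            # epsilon-style comparison
True
>>> math.isclose(0.1 + 0.2, 0.3, rel_tol=0, abs_tol=1e-17)  # but only if the tolerance is right
False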
I knew about the Decimal
type in Python but had never tried to use it. The documentation states that "Decimal numbers can be represented exactly", and I took that to mean a clever implementation able to represent any real number. My first try was:
>>> from decimal import Decimal
>>> d = Decimal(1) / Decimal(3)
>>> d3 = d * Decimal(3)
>>> d3 < Decimal(1)
True
Quite disappointed, I went back to the documentation and kept reading:
The context for arithmetic is an environment specifying precision [...]
OK, so there is actually a precision. And the classic issues can be reproduced:
>>> dd = d * 10**20
>>> dd
Decimal('33333333333333333333.33333333')
>>> for i in range(10000):
...     dd += 1 / Decimal(10**10)
...
>>> dd
Decimal('33333333333333333333.33333333')
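Raising the precision with getcontext().prec only postpones the problem; it is still a fixed, finite precision (the default is 28 significant digits). A quick check:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50          # 50 significant digits instead of the default 28
>>> d = Decimal(1) / Decimal(3)
>>> d * 3 == 1                      # still not exact, just a smaller error
False
>>> getcontext().prec = 28          # back to the default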
So, my question is: is there a way to have a Decimal type with infinite precision? If not, what is the most elegant way of comparing two decimal numbers (e.g. d3 < 1 should return False if the difference is smaller than the precision)?
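The only workaround I can think of is to compare against an explicit tolerance derived from the context precision, something like the sketch below (the lt helper is my own illustration and naively ignores the magnitude of the operands):
>>> from decimal import Decimal, getcontext
>>> def lt(a, b):
...     """Strictly less than, treating differences below the context precision as zero."""
...     tol = Decimal(10) ** -(getcontext().prec - 1)
...     return b - a > tol
...
>>> d3 = Decimal(1) / Decimal(3) * 3   # same d3 as above
>>> d3 < 1                             # plain comparison: True because of the rounding error
True
>>> lt(d3, Decimal(1))                 # tolerant comparison: the tiny rounding delta is ignored
False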
Currently, when I only do divisions and multiplications, I use the Fraction
type:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
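For comparison, the accumulation example from above stays exact with Fraction (at the cost of ever-growing numerators and denominators). Continuing the same session:
>>> ff = Fraction(1, 3) * 10**20
>>> for i in range(10000):
...     ff += Fraction(1, 10**10)
...
>>> ff == Fraction(1, 3) * 10**20 + Fraction(10000, 10**10)
True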
Is this the best approach? What other options are there?