When you convert a float to Decimal, the Decimal will contain as accurate a representation of the binary number as it can. Being accurate is nice, but it isn't always what you want. Since many decimal numbers can't be represented exactly in binary, the resulting Decimal will be a little off: sometimes a little high, sometimes a little low.
>>> from decimal import Decimal
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print(Decimal(f))
...
0.1000000000000000055511151231257827021181583404541015625
0.299999999999999988897769753748434595763683319091796875
10000000000000000905969664
9999999999999999583119736832
1.000000000000099920072216264088638126850128173828125
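As a sanity check, comparing against exact string-built Decimals shows the error really does go in both directions:

>>> Decimal(0.1) > Decimal('0.1')    # the binary value lands a bit high
True
>>> Decimal(0.3) < Decimal('0.3')    # and here a bit low
True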
Ideally we'd like the Decimal to be rounded to the most likely decimal equivalent. I tried converting to str, since a Decimal created from a string will be exact. Unfortunately str rounds a little too much.
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print(Decimal(str(f)))
...
0.1
0.3
1E+25
1E+28
1.0
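To confirm this is real loss and not just a display difference, here is a quick round-trip check on the same interpreter as above (where str(1.0000000000001) yields '1.0'):

>>> f = 1.0000000000001
>>> float(Decimal(str(f))) == f
False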
Is there a way of getting a nicely rounded Decimal from a float?
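For what it's worth, here is a sketch of the behavior I'm after. It leans on repr, which on CPython 2.7+ and 3.1+ returns the shortest string that round-trips to the same float; the helper name nice_decimal is just for illustration, and I'm not sure this is the right general answer:

>>> def nice_decimal(f):
...     # Assumes repr() gives the shortest round-tripping string
...     # (CPython >= 2.7 / 3.1); the helper name is hypothetical.
...     return Decimal(repr(f))
...
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     d = nice_decimal(f)
...     assert float(d) == f   # must round-trip to the original float exactly
...     print(d)
...
0.1
0.3
1E+25
1E+28
1.0000000000001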