Using Python 2.7 on OS X, the printed output differs from the value I assigned to x. I am wondering if there is a way to make Python print a double/float with more precision?
x=0.123456789123456789
print x # output 0.123456789123
Update 1:
I get weird output when using Decimal:
import decimal
x=0.123456789123456789
y=decimal.Decimal(x)
print x," and ", y # output 0.123456789123 and 0.1234567891234567837965840908509562723338603973388671875
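For reference, here is a small snippet (standard library only) that reproduces what I am seeing with the different ways of printing the value; the `%.25f` width is an arbitrary choice on my part:

```python
import decimal

x = 0.123456789123456789

# repr() shows the shortest string that round-trips the float
print(repr(x))
# an explicit format string can print the stored value to more places
print('%.25f' % x)
# Decimal built from the float exposes the exact binary value stored
print(decimal.Decimal(x))
# Decimal built from a string keeps every typed digit
print(decimal.Decimal('0.123456789123456789'))
```

So it looks like the float never held all the digits I typed in the first place, and Decimal just reveals the exact value that was actually stored.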
Regards, Lin