The difference in the number of digits after the decimal point is the difference between str(f) and repr(f).

By default, values are converted to strings using repr() before being displayed in the IPython console:
In [1]: class C:
   ...:     def __str__(self):
   ...:         return 'str'
   ...:     def __repr__(self):
   ...:         return 'repr'
   ...:
In [2]: C()
Out[2]: repr
In [3]: str(C())
Out[3]: 'str'
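For comparison, the plain Python REPL does the same thing through sys.displayhook, which prints repr(value) for every non-None expression result, while print() goes through str(). A minimal sketch using the class above:

import sys

class C:
    def __str__(self):
        return 'str'
    def __repr__(self):
        return 'repr'

sys.displayhook(C())  # the interactive-prompt path: prints repr
print(C())            # print() uses str(): prints str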
btw, I can't reproduce your output on Python 3.4:
In [4]: 1445007755.321532
Out[4]: 1445007755.321532
In [5]: str(1445007755.321532)
Out[5]: '1445007755.321532'
In [6]: 1445007755.321532 .__str__()
Out[6]: '1445007755.321532'
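That's because on Python 3 str() of a float is the same as repr(): both produce the shortest decimal string that round-trips back to the same float (the shortest-repr algorithm arrived in Python 2.7/3.1). A quick check:

f = 1445007755.321532
print(str(f) == repr(f))    # True on Python 3: no separate str format for floats
print(float(repr(f)) == f)  # True: repr's output round-trips exactly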
But I can reproduce it on Python 2:
In [1]: 1445007755.321532
Out[1]: 1445007755.321532
In [2]: str(1445007755.321532)
Out[2]: '1445007755.32'
In [3]: 1445007755.321532 .__str__()
Out[3]: '1445007755.32'
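Python 2's str(float) rounds to 12 significant digits (the equivalent of the '%.12g' format), while its repr() keeps enough digits to round-trip; that is the whole difference. The format string below reproduces the truncated output on any Python version:

a = 1445007755.321532
print('%.12g' % a)  # 1445007755.32 -- Python 2's str() uses 12 significant digits
print('%.17g' % a)  # 17 significant digits, always enough to round-trip a double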
Note: float() does NOT restore the precision here:
In [4]: float('1445007755.32')
Out[4]: 1445007755.32
In [5]: float(1445007755.32)
Out[5]: 1445007755.32
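In other words, the truncation done by Python 2's str() is lossy: the string '1445007755.32' denotes a different, nearby float, and no conversion can bring the dropped digits back. A round-trip check, using '%.12g' as a stand-in for Python 2's str():

a = 1445007755.321532
s = '%.12g' % a       # simulates Python 2's str(a)
print(float(s) == a)  # False: the digits after '.32' are gone for good
print(float(s))       # 1445007755.32 -- a different float than a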
float(a) shows more digits in your question because a is already a float (it is probably a no-op, because floats are immutable):
In [6]: a = 1445007755.321532
In [7]: a is float(a)
Out[7]: True
i.e., float(a) may return the exact same object in this case.
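This identity is a CPython implementation detail: float(x) returns x itself when type(x) is exactly float, but creates a new plain float when x is a float subclass. A small sketch (Price is a made-up subclass for illustration):

a = 1445007755.321532
print(float(a) is a)  # True on CPython: the very same object comes back

class Price(float):   # hypothetical subclass
    pass

p = Price(9.99)
print(float(p) is p)  # False: a new plain float is built from the subclass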