I'm using the general format specifier to print floats as strings, but I find the behaviour somewhat unusual when specifying a precision (the number of digits following the decimal point). The fixed-point format:
In [1]: '%f' % 123.456
Out[1]: '123.456000'
In [2]: '%.1f' % 123.456
Out[2]: '123.5'
works just as expected. Now, compare this to the general format:
In [3]: '%g' % 123.456
Out[3]: '123.456'
In [4]: '%.1g' % 123.456
Out[4]: '1e+02'
Or even worse:
In [6]: '%5g' % 123.456
Out[6]: '123.456'
In [7]: '%.5g' % 123.456
Out[7]: '123.46'
It seems to be taking the "precision" value to mean "width". Is this a bug, or is it expected behaviour?
P.S. What I want is to print floating-point numbers (up to a given precision) with the minimum number of characters needed. For example, 0.301238 with precision 1 should print as 0.3, and 100.0003 with precision 1 should print as 100.
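For what it's worth, the closest workaround I can think of is to format with %f and strip the trailing zeros afterwards. This is just a rough sketch (format_minimal is a name I made up), and it assumes precision >= 1:

In [8]: def format_minimal(x, precision):
   ...:     # Fixed-point format at the given precision, then drop
   ...:     # trailing zeros and any dangling decimal point.
   ...:     return ('%.*f' % (precision, x)).rstrip('0').rstrip('.')
   ...:

In [9]: format_minimal(0.301238, 1)
Out[9]: '0.3'

In [10]: format_minimal(100.0003, 1)
Out[10]: '100'

But this feels hacky, and with precision 0 the rstrip('0') would eat significant trailing zeros (e.g. 10 would become 1), so I'd rather use a format specifier that does this directly if one exists.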