I would like to return the rounded values of num, where the number of decimal places passed to round() is the number of decimal places of each float in scis.
scis = [5e-05, 5e-06, 5e-07, 5e-08]
num = 0.0123456789
returns:
0.01235
0.012346
0.0123457
0.01234568
In order for something like this to work, I need to derive the number of decimal places from each item in scis to pass to round().
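For example, with the decimal places hard-coded, the loop I'm after looks like this (just to show the pairing I want between each item of scis and a round() call):
scis = [5e-05, 5e-06, 5e-07, 5e-08]
num = 0.0123456789
# decimal places hard-coded for illustration; these are what I want to derive from scis
for places in [5, 6, 7, 8]:
    print(round(num, places))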
I wasn't able to come up with an answer after reviewing Why are floating point numbers inaccurate? or Why can't decimal numbers be represented exactly in binary?, and, for the reasons described in Is floating point math broken?, a method like this produces far too many decimal places:
import decimal
scis = [5e-05, 5e-06, 5e-07, 5e-08]
for sci in scis:
    d = decimal.Decimal(sci)
    dp = abs(d.as_tuple().exponent)
    print(dp)
67
70
73
74
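If I understand the linked answers correctly, these huge exponents appear because the Decimal is built from the float itself, so it captures the float's exact binary value rather than the digits I typed; building it from str(sci) instead seems to keep the written precision (a quick sketch of what I mean, not necessarily the right approach):
import decimal
sci = 5e-05
print(decimal.Decimal(sci))       # exact binary value of the float, 67 decimal places
print(decimal.Decimal(str(sci)))  # 0.00005
print(abs(decimal.Decimal(str(sci)).as_tuple().exponent))  # 5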
Am I relegated to parsing the string representation of the scientific notation in order to derive the number of decimal places, or is there a less naive way to approach this?
scis = [5e-05, 5e-06, 5e-07, 5e-08]
num = 0.0123456789
for sci in scis:
    # take the exponent digits after the '-' in e.g. '5e-05'
    places = int(str(sci).split('-')[-1])
    print(round(num, places))
0.01235
0.012346
0.0123457
0.01234568
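For comparison, a purely arithmetic variant seems to produce the same output for values like these (a rough sketch that assumes each item is a single significant digit times a negative power of ten, as in scis; it would miscount something like 2.5e-05):
import math
scis = [5e-05, 5e-06, 5e-07, 5e-08]
num = 0.0123456789
for sci in scis:
    # -floor(log10(sci)) is the position of the leading significant digit
    places = -math.floor(math.log10(sci))
    print(round(num, places))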