I've seen many discussions here about rounding floating-point values in Python and similar topics, but I have a related problem that I'd like a good solution for.
Context:
I use the netCDF4 python library to extract data from NetCDF files. My organization keeps a precision attribute on variables within these files.
Example: TS:data_precision = 0.01f ;
I collect these precision attributes using the library like this:
import netCDF4

def get_precisions(path):
    # path is a filesystem path or URL to a NetCDF dataset
    d = netCDF4.Dataset(path)
    precisions = {}
    for v in d.variables:
        try:
            # getattr is the idiomatic way to read a possibly-absent attribute
            precisions[v] = getattr(d.variables[v], 'data_precision')
        except AttributeError:
            pass
    return precisions
When I retrieve these precision values from a dataset, in Python they come back looking like this:
{u'lat': 9.9999997e-05, u'lon': 9.9999997e-05, u'TS': 0.0099999998, u'SSPS': 0.0099999998}
But, what I really want is:
{u'lat': 0.0001, u'lon': 0.0001, u'TS': 0.01, u'SSPS': 0.01}
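(For context, the odd trailing digits are a single-precision artifact: 0.01 has no exact float32 representation, so the 0.01f stored in the file cannot round-trip exactly. The standard-library struct module can demonstrate this without netCDF4 at all:)

```python
import struct

# Round-trip 0.01 through IEEE-754 single precision (struct's 'f' format),
# mimicking how a float attribute like data_precision is stored in the file.
as_float32 = struct.unpack('f', struct.pack('f', 0.01))[0]
# as_float32 is close to, but not exactly, 0.01
```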
Essentially I need a way in Python to intelligently round these values to their most appropriate decimal place. I'm sure I could come up with a really ugly way to do this, but I'd like to know if there is already a 'nice' solution to this problem.
For my use case, I can take advantage of the fact that these are all 'data_precision' values: I can count the zeros after the decimal point and round to the place just past the last zero (assuming 0 < n < 1). With those assumptions, this is my solution:
#!/usr/bin/python

def intelli_round(n):
    # Count how many times n must be multiplied by 10 to reach 1;
    # that count is the decimal place to round to (assumes 0 < n < 1).
    def get_decimal_place(n):
        count = 0
        while n < 1:
            n *= 10
            count += 1
        return count
    return round(n, get_decimal_place(n))

examples = [0.0099999, 0.00000999, 0.99999]
for e in examples:
    print e, intelli_round(e)
Output:
0.0099999 0.01
9.99e-06 1e-05
0.99999 1.0
Does this seem appropriate? It seems to work under the constraints, but I'm curious to see alternatives.
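One alternative I'd consider (a sketch, not from any library; `round_sig` is a hypothetical helper and assumes n > 0) is to treat this as rounding to one significant figure, using math.log10 to find the magnitude. Unlike the loop above, it also handles values >= 1 without special-casing:

```python
import math

def round_sig(n, sig=1):
    # Hypothetical helper: round n to `sig` significant figures.
    # Assumes n > 0 (log10 is undefined at 0 and complex for negatives).
    exponent = math.floor(math.log10(abs(n)))
    return round(n, -int(exponent) + (sig - 1))

# A terser equivalent uses %g formatting, which rounds to significant
# figures by definition: float('%.1g' % n)
```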