I am aware that Python's built-in float type suffers from floating point rounding errors; that is why I am using pandas instead. I have suddenly started having issues with data I input (not calculated values), and I cannot explain the following behavior:
In [600]: df = pd.DataFrame([[0.05], [0.05], [0.05], [0.05]], columns = ['a'])
In [601]: df.dtypes
Out[601]:
a float64
dtype: object
In [602]: df['a'].sum()
Out[602]: 0.20000000000000001
In [603]: df['a'].round(2).sum()
Out[603]: 0.20000000000000001
In [604]: (df['a'] * 1000000).round(0).sum()
Out[604]: 200000.0
In [605]: (df['a'] * 1000000).round(0).sum() / 1000000
Out[605]: 0.20000000000000001
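For context, the same thing reproduces with plain Python floats and no pandas at all, so it may not be pandas-specific (a minimal sketch, assuming plain CPython):

```python
# Sketch: 0.05 has no exact binary float64 representation, so the value
# actually stored is already slightly off before any summing happens.
print(f"{0.05:.20f}")   # shows the stored value to 20 decimal places

# The sum of four such values carries the same tiny representation error.
total = 0.05 + 0.05 + 0.05 + 0.05
print(f"{total:.20f}")
```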
Hopefully somebody can help me either fix this or figure out how to correctly sum these values to 0.2. (I don't mind if the result is 20 or 2000 instead, but as you can see above, when I then divide back down I end up at the same point where the sum is incorrect.)
(To run my code, remember to first do import pandas as pd.)
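In case it helps, I can get an exact 0.2 with the standard-library decimal module, but I would prefer a pandas-native approach. A sketch, assuming the raw inputs can be kept as strings (e.g. read from a file as text) so they are never converted to binary floats:

```python
# Sketch: exact decimal arithmetic with the standard decimal module.
# Constructing Decimal from a *string* avoids the binary float step entirely.
from decimal import Decimal

values = [Decimal("0.05")] * 4
total = sum(values)
print(total)  # decimal addition is exact here: prints 0.20
```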