I am aware of the technical limitations when comparing floats, but consider the following example:
import pandas as pd
import numpy as np
df = pd.DataFrame({'col1': [1.12060000],
                   'col2': [1.12065000]})
df
Out[155]:
         col1        col2
0  1.12060000  1.12065000
As you can see, col2 and col1 are exactly 0.00005 apart. Now, I want to test that. I understand that the following returns the wrong result because the decimal fractions involved have no exact binary representation:
(df.col2 - df.col1) < 0.00005
Out[156]:
0 True
dtype: bool
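For reference, printing the underlying doubles with more digits shows that none of these literals is stored exactly (a quick sanity check; the exact trailing digits depend on the platform, though any IEEE-754 machine behaves the same way):

print(f"{1.1206:.20f}")             # the double actually stored for col1
print(f"{1.12065:.20f}")            # the double actually stored for col2
print(f"{1.12065 - 1.1206:.20f}")   # the computed difference is not exactly 0.00005
print(f"{0.00005:.20f}")            # the threshold literal 0.00005 is not exact either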
However, more puzzling to me are the following results:
(100000*df.col2 - 100000*df.col1) < 5
Out[157]:
0 True
dtype: bool
while
(1000000*df.col2 - 1000000*df.col1) < 50
Out[158]:
0 False
dtype: bool
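Checking the products directly with plain Python floats shows the same asymmetry (a quick sketch using the literals from above, so the effect is not pandas-specific):

# Multiplying by a power of ten does not turn the stored doubles into
# exact integers; each product is rounded to the nearest double again:
print(f"{100000 * 1.12065 - 100000 * 1.1206:.15f}")    # lands just under 5 here, hence True above
print(f"{1000000 * 1.12065 - 1000000 * 1.1206:.15f}")  # comes out at 50 or just above, hence False above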
Why does the comparison to 5 fail while only the last one works? I thought scaling up to whole numbers would avoid the issues with comparing floats?
Thanks!