I have a big problem filtering my data. I've read a lot here on Stack Overflow and on other pages and tutorials, but I could not solve my specific problem. The first part of my code, where I load my data into Python, looks as follows:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from arch import arch_model
spotmarket = pd.read_excel("./data/external/Spotmarket_dhp.xlsx")
r = spotmarket['Price'].pct_change().dropna()
returns = 100 * r
df = pd.DataFrame(returns)
The Excel table has 43,000 values in one column and contains the hourly prices. I use this data to calculate the percentage change from hour to hour, and the problem is that there are sometimes big changes of between 1,000% and 40,000%. The dataframe looks as follows:
df
Out[12]:
Price
1 20.608229
2 -2.046870
3 6.147789
4 16.519258
...
43827 -16.079874
43828 -0.438322
43829 -40.314465
43830 -100.105374
43831 700.000000
43832 -62.500000
43833 -40400.000000
43834 1.240695
43835 52.124183
43836 12.996778
43837 -17.157795
43838 -30.349971
43839 6.177924
43840 45.073701
43841 76.470588
43842 2.363636
43843 -2.161042
43844 -6.444781
43845 -14.877102
43846 6.762918
43847 -38.790036
[43847 rows x 1 columns]
I want to exclude these outliers. I've tried different approaches, like calculating the mean and the std and excluding all values that are more than three times the std away from the mean. It works for a small part of the data, but for the complete data, the mean and the std are both NaN. Does anyone have an idea how I can filter my dataframe?
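For reference, this is roughly the three-standard-deviations filter I tried. The sketch below uses a small synthetic series standing in for my returns column (the real data comes from the Excel file above), and the column name `Price` matches my dataframe:

```python
import pandas as pd

# Synthetic stand-in for the percentage-change returns, including one extreme outlier
returns = [20.6, -2.0, 6.1, 16.5, -16.1, -0.4, -40.3, 1.2,
           52.1, 13.0, -17.2, -30.3, 6.2, 45.1, 76.5, -40400.0]
df = pd.DataFrame({'Price': returns})

mean = df['Price'].mean()
std = df['Price'].std()

# Keep only values within three standard deviations of the mean
filtered = df[(df['Price'] > mean - 3 * std) & (df['Price'] < mean + 3 * std)]
```

On this small sample the filter does drop the extreme value, but on my full dataset `mean` and `std` both come back as NaN, so `filtered` ends up empty.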