My RAM is around 15GB.
I have a 30GB+ dataset, which I read in chunks:
df_user_logs = pd.read_csv('../input/user_logs.csv', chunksize=1000000)
and then for each chunk I applied memory reduction like this:
list_of_dfs = []
for chunk in df_user_logs:
    change_datatype(chunk)
    change_datatype_float(chunk)
    list_of_dfs.append(chunk)
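(For reference, change_datatype and change_datatype_float are the dtype-downcasting helpers I took from the linked answers. A rough sketch of what they do, not the exact code from the links, assuming they downcast numeric columns with pd.to_numeric:)

import pandas as pd

def change_datatype(df):
    # Downcast integer columns in place to the smallest integer subtype that fits.
    for col in df.select_dtypes(include=['int']).columns:
        df[col] = pd.to_numeric(df[col], downcast='integer')

def change_datatype_float(df):
    # Downcast float columns in place (float64 -> float32 where possible).
    for col in df.select_dtypes(include=['float']).columns:
        df[col] = pd.to_numeric(df[col], downcast='float')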
I did this following the answers and comments given in Link 1 and Link 2.
However, a MemoryError occurred when I tried to concat the list_of_dfs:
df_user_logs = pd.concat(list_of_dfs)
Any solution would be greatly appreciated.