I'm simply loading a CSV file in JupyterLab as follows:
import pandas as pd
data = pd.read_csv('data_simple.csv')
The file is around 300 MB, so when I load it, memory usage increases significantly, by roughly 500 MB. That's okay.
But when I run the exact same cell again, memory usage increases by about the same amount, and it keeps growing each time I re-run the cell.
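To quantify this, here is roughly how I've been watching the kernel's memory (a minimal sketch; psutil and the helper rss_mb are my own additions for measurement, not part of the original code):

import os
import pandas as pd
import psutil

process = psutil.Process(os.getpid())

def rss_mb():
    # Resident set size of this kernel process, in MB
    return process.memory_info().rss / 1024 ** 2

print(f"before: {rss_mb():.0f} MB")
data = pd.read_csv('data_simple.csv')
print(f"after 1st load: {rss_mb():.0f} MB")
data = pd.read_csv('data_simple.csv')   # same variable, re-assigned
print(f"after 2nd load: {rss_mb():.0f} MB")  # climbs again instead of staying flat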
Why does this happen? I'm loading it into the same variable, data. Shouldn't reassigning it free the old data? Where does the old data go if it all just stays in memory? I have tried to Google it but couldn't find anything except this. Thanks.