I was training my neural nets and noticed that even after calling `torch.cuda.empty_cache()` and `gc.collect()`, my CUDA device memory stays occupied. In Colab notebooks we can see the current variables in memory, but even when I delete every variable and run the garbage collector, the GPU memory remains busy. I heard this is because Python's garbage collector can't operate on the CUDA device. Please explain what I should do.
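As a sketch of the usual pattern (assuming PyTorch and a CUDA device; `free_gpu_memory` is a name I made up for illustration): references to CUDA tensors must be dropped *before* `torch.cuda.empty_cache()` can return anything, and some usage in `nvidia-smi` will always remain because the CUDA context itself cannot be released from within the process.

```python
import gc
import torch

def free_gpu_memory():
    # Hypothetical helper: drop unreachable Python objects first --
    # empty_cache() can only release blocks no live tensor still owns.
    gc.collect()
    if torch.cuda.is_available():
        # Return cached (unused) blocks from PyTorch's caching allocator
        # back to the driver. nvidia-smi will still show some usage:
        # that is the CUDA context overhead, which cannot be freed.
        torch.cuda.empty_cache()

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # ~4 MB of float32
    del x                   # drop the last reference to the tensor
    free_gpu_memory()
    # memory_allocated() counts memory held by live tensors and
    # should now be back to its previous level.
    print(torch.cuda.memory_allocated())
```

Note the distinction between `torch.cuda.memory_allocated()` (live tensors) and `torch.cuda.memory_reserved()` (the allocator's cache); `nvidia-smi` reports the latter plus the context, which is why it never drops to zero.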
please see [How to clear CUDA memory in PyTorch](https://stackoverflow.com/questions/55322434/how-to-clear-cuda-memory-in-pytorch) – jinyi wu Dec 01 '21 at 08:04