"CUDA out memory". I just want to know that is there any way using pytorch to not run out of CUDA memory without reducing any parameters like reducing batch size. btw, I'm working on a 6 GB NVIDIA RTX 3060.
- Does this answer your question? [How to avoid "CUDA out of memory" in PyTorch](https://stackoverflow.com/questions/59129812/how-to-avoid-cuda-out-of-memory-in-pytorch) – subspring Mar 01 '22 at 09:27
- Also check this: https://stackoverflow.com/questions/48473573/getting-cuda-out-of-memory – subspring Mar 01 '22 at 09:28
- Am I supposed to run the garbage collector after every epoch? The training runs fine on the first epoch but crashes with a CUDA out-of-memory error as soon as the second epoch starts. – sharktooth Mar 01 '22 at 09:50
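A crash at the start of the second epoch usually means tensors from epoch one are still referenced and attached to the autograd graph, most often because the running loss accumulates `loss` itself instead of `loss.item()`. A minimal sketch of the safe pattern, with placeholder `model`/`loader` names (not taken from the question):

```python
import gc
import torch

def train_one_epoch(model, loader, optimizer, loss_fn, device):
    running_loss = 0.0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        # .item() returns a Python float, detached from the graph;
        # accumulating `loss` directly would keep every iteration's
        # graph alive and grow GPU memory across epochs.
        running_loss += loss.item()
    return running_loss / len(loader)

def free_memory():
    # Drop Python references first, then return cached blocks to the
    # CUDA allocator. This does not fix a leak, but can help between
    # epochs when memory is fragmented.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```

Calling `free_memory()` between epochs is harmless, but if memory genuinely grows each epoch, the fix is removing the lingering graph references rather than collecting garbage.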