46

I am training PyTorch deep learning models in a JupyterLab notebook, using CUDA on a Tesla K80 GPU. During the training iterations, the 12 GB of GPU memory are used. I finish training by saving the model checkpoint, but I want to continue using the notebook for further analysis (inspecting intermediate results, etc.).

However, these 12 GB remain occupied after training finishes (as seen in nvtop). I would like to free up this memory so that I can use it for other notebooks.

My workaround so far has been to restart the notebook's kernel, but that doesn't solve the problem, because then I lose the outputs computed so far and can't continue working in the same notebook.

talonmies
Glyph

6 Answers

31

The answers so far are correct for the CUDA side of things, but there's also an issue on the IPython side.

When you have an error in a notebook environment, the IPython shell stores the traceback of the exception so you can access the error state with %debug. The problem is that this keeps every variable involved in the error alive in memory, and they aren't reclaimed by methods like gc.collect(). Basically, all those variables get stuck and the memory is leaked.

Usually, raising a new exception frees the state of the old one, so running something like 1/0 may help. However, things can get weird with CUDA variables, and sometimes there's no way to clear your GPU memory without restarting the kernel.
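A minimal sketch of that trick, run as two separate notebook cells (this assumes the stored traceback from the failed cell is what is pinning your CUDA tensors):

# Cell 1: raise a tiny, harmless exception so IPython replaces the stored
# traceback (sys.last_traceback) from the earlier failure, releasing the
# objects its frames were pinning.
1/0

# Cell 2: now the old objects can actually be collected and the cache returned.
import gc
import torch

gc.collect()              # reclaim the now-unreferenced Python objects
torch.cuda.empty_cache()  # return PyTorch's unused cached blocks to the driver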

For more detail see these references:

https://github.com/ipython/ipython/pull/11572

How to save traceback / sys.exc_info() values in a variable?

Karl
21

If you set the object that uses a lot of memory to None, like this:

obj = None

and after that call

gc.collect() # Python thing

you may be able to avoid restarting the notebook.


If you would also like to see the memory freed in nvidia-smi or nvtop, you may run:

torch.cuda.empty_cache() # PyTorch thing

to empty the PyTorch cache.
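Putting the pieces together, here is a minimal sketch of the full sequence (assuming the large object is a model held in a variable named model):

import gc
import torch

model = None              # or: del model, to drop the last Python reference
gc.collect()              # let Python reclaim the now-unreferenced objects
torch.cuda.empty_cache()  # release PyTorch's unused cached blocks back to the driver

The order matters: empty_cache() can only hand back blocks that no live tensor is still using, so drop the references and collect first.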

prosti
  • 6
    I tried `model = None` and `gc.collect()` but it didn't clear any GPU memory – Glyph Sep 18 '19 at 15:38
  • I usually use nvtop for checking GPU memory. Is that a good way to do it? – Glyph Sep 18 '19 at 17:33
  • gc.collect() tells Python to do garbage collection. If you use NVIDIA tools you won't see the memory clear, because PyTorch still holds it in its allocator cache, but it does become available for reuse. – prosti Sep 18 '19 at 18:06
  • 2
    Yeah, `torch.cuda.empty_cache()` may help you see it clear. – prosti Sep 18 '19 at 18:08
  • 2
    it worked for me, in the same order. 1.- model = None, 2.- gc.collect(), 3.- torch.cuda.empty_cache() – Oscar Rangel Jun 05 '22 at 16:45
17
import torch

with torch.no_grad():
    torch.cuda.empty_cache()
Maunish Dave
1

If you have a variable called model, you can try to free the GPU memory it occupies (assuming it lives on the GPU) by first dropping the reference to that memory with del model and then calling torch.cuda.empty_cache().
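A minimal sketch of this, with torch.cuda.memory_allocated() / torch.cuda.memory_reserved() calls added purely to observe the effect (they are not part of the original suggestion), assuming model holds the only remaining reference to the GPU tensors:

import torch

print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())  # before
del model                 # drop the reference to the GPU-resident model
torch.cuda.empty_cache()  # hand the now-unused cached blocks back to the driver
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())  # after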

1

Apparently you can't clear the GPU memory via a command once the data has been sent to the device. This is discussed in the PyTorch GitHub issues, but the following seems to work for me.

Context: I have PyTorch running in JupyterLab in a Docker container with access to two GPUs [0, 1]. Two notebooks are running: the first is on a long job, while I use the second for small tests. When I started doing this, repeated tests seemed to progressively fill the GPU memory until it maxed out. I tried all the suggestions (del, clearing the GPU cache, etc.); nothing worked until the following.

To clear the second GPU I first installed numba ("pip install numba") and then ran the following code:

from numba import cuda

cuda.select_device(1)  # choose the second GPU
cuda.close()           # close the CUDA context on that device, freeing its memory

Note that I don't actually use numba for anything except clearing the GPU memory. I selected the second GPU because the first is being used by another notebook; pass the index of whichever GPU you need. Finally, while this doesn't kill the Jupyter kernel, it does tear down the CUDA context on that device, so you can't use it intermittently during a run to free up memory.

MikeB2019x
  • I have a similar issue in Jupyter and your solution actually releases the memory, but after closing the device I cannot access it anymore `RuntimeError: CUDA error: invalid argument` – Aray Karjauv Aug 03 '23 at 14:51
  • @ArayKarjauv those are hard to diagnose. I used this nice step-by-step advice [here](https://discuss.pytorch.org/t/torch-prod-produces-runtimeerror-cuda-driver-error-invalid-argument/179054/5) for some problems I once had but I've never had what you're describing. – MikeB2019x Aug 03 '23 at 16:58
-1

I've never worked with PyTorch myself, but Google has several results which all basically say the same thing: torch.cuda.empty_cache()

https://forums.fast.ai/t/clearing-gpu-memory-pytorch/14637

https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530

How to clear Cuda memory in PyTorch

iScripters
  • 8
    `torch.cuda.empty_cache()` cleared most of the used memory, but I still have 2.7 GB being used. It might be the memory occupied by the model, but I don't know how to clear it. I tried `model = None` and `gc.collect()` from the other answer and it didn't work. – Glyph Sep 18 '19 at 15:40