I'm trying to use aitextgen to fine-tune the 774M GPT-2 model on a dataset. Unfortunately, no matter what I do, training fails because only 80 MB of VRAM are available. How can I clear the VRAM without restarting the runtime, and perhaps prevent it from filling up in the first place?
2 Answers
9
Another solution is to use the following snippets.
- First, install numba:
!pip install numba
- Then:
from numba import cuda

# ... all of your code and execution ...

# Detach from the GPU and release its VRAM
cuda.select_device(0)
cuda.close()
Your problem is discussed in the official TensorFlow GitHub repository: https://github.com/tensorflow/tensorflow/issues/36465
Update: @alchemy reported that `cuda.close()` is unrecoverable, i.e. the GPU cannot be turned back on afterwards. You can try the code below instead:
from numba import cuda

device = cuda.get_current_device()
device.reset()
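Since aitextgen trains with PyTorch under the hood, another option worth trying before resetting the device is to drop Python references to the model and clear PyTorch's CUDA cache. This is only a sketch, assuming a PyTorch-based setup; `free_vram` is a hypothetical helper name, not part of aitextgen:

```python
import gc

# Assumes torch is installed, as it is wherever aitextgen runs.
import torch

def free_vram():
    """Best-effort VRAM cleanup without resetting the device."""
    gc.collect()                  # drop unreachable Python objects first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached CUDA blocks to the driver

# Call after deleting your model/trainer variables, e.g. `del ai`.
free_vram()
```

Note that `empty_cache()` only releases memory PyTorch has cached but is no longer using; VRAM held by live tensors stays allocated until those objects are garbage-collected.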

Joyanta J. Mondal
- Well, I hope it gets solved soon then; I'm tired of only being able to fine-tune the 124M GPT-2. – Blazeolmo 343 Mar 07 '22 at 15:38
- This is unrecoverable; use `device = cuda.get_current_device(); device.reset()` instead: https://stackoverflow.com/a/73517694/4240654 – alchemy Oct 09 '22 at 21:49
0
- Run the command
!nvidia-smi
inside a notebook cell.
- Look in the output for the process ID of the GPU process you no longer need, so you can kill it to free up VRAM. Then run the command
!kill process_id
(replacing process_id with that ID). It should help you.
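The lookup step above can also be scripted from Python. A minimal sketch, assuming the standard `nvidia-smi` CLI is on the PATH (`gpu_process_pids` is a hypothetical helper name; the `--query-compute-apps` flag is a stock `nvidia-smi` query option):

```python
import shutil
import subprocess

def gpu_process_pids():
    """Return the PIDs of processes currently using the GPU, or [] if
    nvidia-smi is unavailable or no compute processes are running."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver/tooling on this machine
    out = subprocess.run(
        ["nvidia-smi", "--query-compute-apps=pid", "--format=csv,noheader"],
        capture_output=True, text=True, check=False,
    ).stdout
    return [int(line) for line in out.split() if line.strip().isdigit()]

pids = gpu_process_pids()
```

You could then pass one of the returned PIDs to `!kill` (or `os.kill`) to terminate the offending process and reclaim its VRAM.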

Joyanta J. Mondal
- Just to recheck: is the "GPU" setting turned on under the "Notebook settings" option? Because this should not be happening if the GPU is not enabled and in use. – Joyanta J. Mondal Mar 07 '22 at 14:20
- No, no. Don't get me wrong, and don't be offended; I am trying to help you. I already provided another solution below; you can check that. – Joyanta J. Mondal Mar 07 '22 at 14:29