
I'm trying to use aitextgen to fine-tune the 774M GPT-2 model on a dataset. Unfortunately, no matter what I do, training fails because only 80 MB of VRAM are available. How can I clear the VRAM without restarting the runtime, and maybe prevent the VRAM from filling up in the first place?

Blazeolmo 343

2 Answers


Another solution is to use the following code snippets.

First, install numba:

!pip install numba

Then:

from numba import cuda
# ... all of your code and execution ...
cuda.select_device(0)
cuda.close()

Your problem is discussed in the official TensorFlow GitHub repository: https://github.com/tensorflow/tensorflow/issues/36465

Update: @alchemy reported that the GPU cannot be turned back on after this. You can try the code below instead.

device = cuda.get_current_device() 
device.reset()
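Note that cuda.close() or device.reset() tears down the CUDA context, but memory held by live Python objects cannot be reclaimed until their references are dropped. As a minimal sketch (pure Python, no GPU required; FakeTensor is a hypothetical stand-in for a GPU tensor), dropping references and forcing a garbage-collection pass is what actually lets allocations be freed:

```python
import gc
import weakref

class FakeTensor:
    """Stand-in for a GPU tensor holding a large allocation."""
    def __init__(self, size):
        self.data = bytearray(size)

tensor = FakeTensor(10 * 1024 * 1024)   # pretend this lives in VRAM
probe = weakref.ref(tensor)             # lets us check when it is collected

# While a reference exists, the allocation cannot be reclaimed.
assert probe() is not None

del tensor      # drop the last reference...
gc.collect()    # ...and force a collection pass

# Only now would closing/resetting the device actually release that memory.
assert probe() is None
```

So before calling cuda.close(), make sure you have deleted the model and any tensors that reference GPU memory.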
Joyanta J. Mondal
  • Well, I hope it will be solved soon then; I'm tired of only being able to fine-tune 124M GPT-2. – Blazeolmo 343 Mar 07 '22 at 15:38
  • This is unrecoverable; use `device = cuda.get_current_device(); device.reset()` https://stackoverflow.com/a/73517694/4240654 – alchemy Oct 09 '22 at 21:49
  1. Run the command !nvidia-smi inside a notebook cell.
  2. Look for the process ID of the GPU process you no longer need, then free the VRAM by running the command !kill process_id

It should help you.

Joyanta J. Mondal