93

I successfully trained the network but got this error during validation:

RuntimeError: CUDA error: out of memory

xiaoding chen
  • How do you eventually fix the bug then? Do you reduce the batch size? – guanh01 Oct 11 '20 at 18:58
  • [@xiaoding](https://stackoverflow.com/users/10912236/xiaoding-chen), could you tell us please, what was the solution? –  Feb 09 '21 at 14:50
  • [@Lauraishere](https://stackoverflow.com/users/3845590/lauraishere), they commented below that they reduced the batch size and it did not work. Same for me also. Did you solve your problem, and if yes, could you please share? –  Feb 09 '21 at 14:56
  • If the model is only being used for validation, you can try wrapping the forward pass in `torch.no_grad()`. – Abhibha Gupta Jun 05 '21 at 15:05
  • Also, [Pytorch FAQ](https://pytorch.org/docs/stable/notes/faq.html) provides good insight on why this problem occurs and provides some solutions for this problem. – Amir Pourmand Jun 09 '22 at 07:58

15 Answers

44

The error occurs because you ran out of memory on your GPU.

One way to solve it is to reduce the batch size until your code runs without this error.
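For example, if you load data with a PyTorch DataLoader, the batch size is fixed at construction time; a quick check is to halve it until the error goes away (val_dataset below is a placeholder for your own Dataset):

from torch.utils.data import DataLoader

# val_dataset is assumed to exist already; only batch_size changes
val_loader = DataLoader(val_dataset, batch_size=8, shuffle=False)  # try 8, then 4, ...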

K. Khanda
  • I tried it; I reduced the batch size to 8, but it still gives the same error. – xiaoding chen Jan 27 '19 at 13:53
  • The amount of data in the training set is much larger than in the validation set. Why is there no error during training, but one during validation? – xiaoding chen Jan 27 '19 at 13:55
  • Another approach which helped me was this: I ran this command in terminal `sudo rm -rf ~/.nv` and after rebooted my laptop. – K. Khanda Jan 27 '19 at 14:49
  • Also, tensors that were used during training may still be alive while you are creating even more during validation. – K. Khanda Jan 27 '19 at 14:51
  • You can check this issue here: https://github.com/tensorflow/tensorflow/issues/19731 – K. Khanda Jan 27 '19 at 16:43
  • However, in your case it seems that it is better to use PyTorch. It supports dynamic computation graphs, which means that graphs are created on the go. PyTorch also frees graphs after each iteration, which may resolve your current memory-leak problem in TensorFlow. – K. Khanda Jan 27 '19 at 16:45
  • I had batch size 2 and it was still crashing; it only worked with batch size = 1. That might be the case for NLP models, which are memory hungry. This was my problem, but I have to say I have a GPU from 2014 with only 4GB of memory. – Wojciech Jakubas Feb 20 '22 at 09:59
42

The best way is to find the process occupying GPU memory and kill it.

Find the PID of the Python process with:

nvidia-smi

then copy the PID and kill it with:

sudo kill -9 pid
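On recent drivers, nvidia-smi can also list just the compute processes and their memory usage, which helps when the culprit isn't obvious (see nvidia-smi --help-query-compute-apps for the exact field names on your version):

nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv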
Milad shiri
  • What other programs could be taking up a lot of GPU memory, other than something obvious like a game? – IntegrateThis Dec 10 '20 at 08:11
  • For others: If you stop a program mid-execution using Jupyter it can continue to hog GPU memory. This answer makes it clear that the only way to get around this issue in this case is to restart the kernel. – krc Jan 18 '23 at 01:28
41

1. When you only perform validation, not training, you don't need to calculate gradients for the forward and backward passes. In that situation, your code can be placed under torch.no_grad():

with torch.no_grad():
    ...
    net = Net()
    pred_for_validation = net(input)
    ...

The code above doesn't build a computation graph, so it uses far less GPU memory.

2. If you use the += operator on loss tensors in your code, the computation graph can accumulate across iterations. In that case, you need to use float(), as described on the following page:
https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory

Although the docs suggest float(), in my case item() also worked:

entire_loss = 0.0
for i in range(100):
    one_loss = loss_function(prediction, label)
    entire_loss += one_loss.item()  # item() returns a plain Python float, detached from the graph

3. If you use a for loop in your training code, variables created inside it can stay alive until the entire loop ends. So, in that case, you can explicitly delete intermediate variables after optimizer.step():

for one_epoch in range(100):
    ...
    optimizer.step()
    del intermediate_variable1, intermediate_variable2, ...
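Putting points 1 and 2 together, a minimal validation loop could look like this (model, val_loader, and loss_function are placeholders for your own objects):

model.eval()                          # switch off dropout / batch-norm updates
total_loss = 0.0
with torch.no_grad():                 # no graph is built, so activations are freed immediately
    for inputs, labels in val_loader:
        inputs = inputs.to('cuda')
        labels = labels.to('cuda')
        outputs = model(inputs)
        total_loss += loss_function(outputs, labels).item()  # plain Python float
print(total_loss / len(val_loader))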
YoungMin Park
32

I had the same issue and this code worked for me:

import gc
import torch

gc.collect()               # drop unreachable Python objects that still reference tensors
torch.cuda.empty_cache()   # release cached, unused blocks back to the driver
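As noted in a comment elsewhere in this thread, empty_cache() doesn't increase the amount of GPU memory available to PyTorch; it hands unused cached blocks back to the driver, which helps mainly when another process needs that memory or when fragmentation is the problem.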
behnaz.sheikhi
10

It might happen for a number of reasons, which I try to report in the following list:

  1. Module parameters: check the dimensions of your modules. Linear layers that transform a big input tensor (e.g., size 1000) into another big output tensor (e.g., size 1000) require a matrix of size (1000, 1000); see the short sketch below for a rough memory estimate.
  2. RNN decoder maximum steps: if you're using an RNN decoder in your architecture, avoid looping for a big number of steps. Usually, you fix a given number of decoding steps that is reasonable for your dataset.
  3. Tensor usage: minimise the number of tensors that you create. The garbage collector won't release them until they go out of scope.
  4. Batch size: incrementally increase your batch size until you go out of memory. It's a common trick that even famous libraries implement (see the biggest_batch_first description for the BucketIterator in AllenNLP).

In addition, I would recommend having a look at the official PyTorch documentation: https://pytorch.org/docs/stable/notes/faq.html
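As a rough illustration of point 1, you can count a layer's parameters and estimate their memory footprint directly (plain PyTorch, nothing assumed beyond torch.nn):

import torch.nn as nn

layer = nn.Linear(1000, 1000)
n_params = sum(p.numel() for p in layer.parameters())  # 1000*1000 weights + 1000 biases
print(n_params)                                        # 1001000
print(n_params * 4 / 1024**2, 'MiB as float32')        # ~3.8 MiB, before activations and gradients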

Alessandro Suglia
  • The same network is used for training and validation. Why is there no error during training, but one during validation? – xiaoding chen Jan 27 '19 at 14:05
8

I am a PyTorch user. In my case, the cause of this error message was actually not GPU memory, but a version mismatch between PyTorch and CUDA.

Check whether the cause is really your GPU memory by running the code below.

import torch
foo = torch.tensor([1,2,3])
foo = foo.to('cuda')

If an error still occurs for the above code, it will be better to re-install your PyTorch according to your CUDA version. (In my case, this solved the problem.) See the [PyTorch install page](https://pytorch.org/get-started/locally/).
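To see which CUDA version your PyTorch build was compiled against, and whether the runtime can reach the GPU at all, the standard introspection attributes are enough:

import torch

print(torch.__version__)           # e.g., 1.10.0+cu113
print(torch.version.cuda)          # CUDA version this build was compiled for
print(torch.cuda.is_available())   # False often indicates a version/driver mismatch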

A similar case can also happen with TensorFlow/Keras.

Toru Kikuchi
  • what does 're-install your Pytorch according to your CUDA version' mean? How do you correspond versions of cuda and pytorch? let's say I'm installing the nightly version, what cuda version is appropriate in your definition? – Blade Dec 07 '21 at 19:49
  • @Blade, the answer to your question won't be static. But [this page](https://pytorch.org/get-started/locally/) suggests that the current nightly build is built against CUDA 10.2 (but one can install a CUDA 11.3 version etc.). Moreover, the [previous versions](https://pytorch.org/get-started/previous-versions/) page also has instructions on installing for specific versions of CUDA. – damagedgods Dec 09 '21 at 01:13
4

If you are getting this error in Google Colab, use this code:

import torch
torch.cuda.empty_cache()
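To see how much memory PyTorch itself is holding before and after the call, the standard torch.cuda counters can help:

import torch

print(torch.cuda.memory_allocated())  # bytes occupied by live tensors
print(torch.cuda.memory_reserved())   # bytes held by the caching allocator (>= allocated)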
  • Can we use this code on our local machines too? I keep getting this error as well, in a more detailed fashion: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.95 GiB total capacity; 2.80 GiB already allocated; 39.31 MiB free; 2.89 GiB reserved in total by PyTorch) @ThembaTman – Lakshmi Narayanan Jul 06 '21 at 12:08
  • Yeah, you can. empty_cache() doesn't increase the amount of GPU memory available for PyTorch. However, in some instances, it can help reduce GPU memory fragmentation. – Themba Tman Jul 07 '21 at 13:20
2

Not sure if this'll help you or not, but this is what solved the issue for me:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

Nothing else in this thread helped.

WaXxX333
1

In my experience, this is not a typical CUDA OOM Error caused by PyTorch trying to allocate more memory on the GPU than you currently have.

The giveaway is the distinct lack of the following text in the error message.

Tried to allocate xxx GiB (GPU Y; XXX GiB total capacity; yyy MiB already allocated; zzz GiB free; aaa MiB reserved in total by PyTorch)

In my experience, this is an Nvidia driver issue. A reboot has always solved the issue for me, but there are times when a reboot is not possible.

One alternative to rebooting is to kill all Nvidia processes and reload the drivers manually. I always refer to the unaccepted answer of this question written by Comzyh when performing the driver cycle. Hope this helps anyone trapped in this situation.
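For reference, a minimal sketch of such a driver cycle on Linux (assuming no display server is using the GPU, and that the stuck state is in the nvidia_uvm compute module, which is typically the case after a crashed job):

sudo fuser -v /dev/nvidia*   # list processes still holding the device files
sudo kill -9 <pid>           # stop any stragglers found above
sudo rmmod nvidia_uvm        # unload the compute (UVM) module
sudo modprobe nvidia_uvm     # reload it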

0

If someone arrives here because of fast.ai, the batch size of a loader such as ImageDataLoaders can be controlled via bs=N where N is the size of the batch.

My dedicated GPU is limited to 2GB of memory; using bs=8 in the following example worked in my situation:

from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(244), num_workers=0, bs=8)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
dgellow
  • This is exactly where I was encountering this error - trying to execute the above Jupyter cell from the book "Deep Learning for Coders with fastai and pytorch". At first it didn't work: even with num_workers=0 and bs=8, it ran out of memory. I tried bs=4, and I tried shutting down all other running apps; still out of memory. But then I decided to reboot (always a good idea with Windows), and after that it took a while, but it ran successfully. In fact, thinking about it, I'd probably recommend rebooting first, then using just num_workers=0 (which is necessary under Windows). – John Deighan Nov 25 '20 at 14:02
0

The problem was solved by the following code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'  # set before CUDA is initialized
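Equivalently, you can set the variable on the command line when launching the script, which avoids any ordering issues with CUDA initialization (train.py is a placeholder):

CUDA_VISIBLE_DEVICES=2,3 python train.py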
ah bon
  • I guess this would [only work when you had multiple GPUs](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on)? – Adam Burke Oct 19 '22 at 06:31
0

If you're running Keras/TF in Jupyter on a local server and another notebook is open which was accessing the GPU, you can also get this error. Just halt and close the other notebook(s). This can occur even if the other notebook isn't actively running anything.

This is distinct from PyTorch OOM errors, which typically refer to PyTorch's allocation of GPU RAM and are of the form

OutOfMemoryError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 7.79 GiB total capacity; 5.20 GiB already allocated; 139.94 MiB free; 6.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Because PyTorch manages a subset of GPU RAM for a given job, it can sometimes throw an OOM error even though there's sufficient available RAM on the GPU (just not enough in Torch's self-allocation).

These errors can be a bit obscure to troubleshoot, but generally three techniques can be helpful:

  1. at the head of your notebook, add these lines:

     import os
     os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

  2. delete objects that are on the GPU as soon as you don't need them anymore (a sketch follows this list)
  3. reduce things like batch_size in training or testing scenarios
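A minimal sketch of technique 2 inside a training step (model, criterion, batch, and y are placeholders for your own objects):

logits = model(batch)
loss = criterion(logits, y)
loss.backward()
del logits, loss              # drop references so the graph memory can be freed
torch.cuda.empty_cache()      # optionally hand cached blocks back to the driver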

You can monitor GPU RAM at a basic level with watch nvidia-smi:

Every 2.0s: nvidia-smi                                                                     numbaCruncha123: Wed May 31 11:30:57 2023

Wed May 31 11:30:57 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03   Driver Version: 510.108.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:26:00.0 Off |                  N/A |
| 37%   33C    P2    34W / 175W |   7915MiB /  8192MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2905      C   ...user/z_Venv/NC/bin/python     1641MiB |
|    0   N/A  N/A     31511      C   ...user/z_Venv/NC/bin/python     6271MiB |
+-----------------------------------------------------------------------------+

This will tell you what's using RAM across the entire GPU.

Note: if you've got a notebook running but don't see anything here, it's possible you're running on the CPU.

James_SO
0

Find out what other processes are also using the GPU and free up that space.

Find the PID of the Python process by running:

nvidia-smi

and kill it using

sudo kill -9 pid
Aadesh
0

I had this same error: RuntimeError: CUDA error: out of memory

I was able to resolve this on a machine with 4 GPUs by first running nvidia-smi to learn that GPU 1 was already at full capacity from another user, causing the error as my script also tried to use the first GPU. I then ran export CUDA_VISIBLE_DEVICES=2,3,4 on the CLI. My script now runs by looking only for GPUs 2, 3, and 4, and ignores GPU 1.

In my case, my code doesn't actually need a GPU but was trying to use one, so I set export CUDA_VISIBLE_DEVICES="" and now it runs on the CPU without attempting to use the GPU.

Amanda
-3

I faced the same issue with my computer. All you have to do is customize your configuration file to match your computer's specifications. It turns out my computer takes image sizes below 600 x 600, and when I adjusted that in the configuration file, the program ran smoothly.

[Picture describing my cfg file]

Gino Mempin