24

I am trying to run a simple PyTorch sample. It works fine on the CPU, but when I use the GPU I get this error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 263, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 260, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

The code I am trying to run is the following:

import torch
from torch import nn

m = nn.Conv1d(16, 33, 3, stride=2)  # 1-D convolution: 16 input channels, 33 output channels, kernel size 3
m = m.to('cuda')
input = torch.randn(20, 16, 50)     # batch of 20, 16 channels, length 50
input = input.to('cuda')
output = m(input)                   # fails here with CUDNN_STATUS_NOT_INITIALIZED

I am running this code in an NVIDIA Docker container with CUDA 10.2, and my GPU is an RTX 2070.


8 Answers

22

In my case it actually had nothing to do with the PyTorch/CUDA/cuDNN version. PyTorch initializes cuDNN lazily the first time a convolution is executed. However, in my case there was not enough GPU memory left to initialize cuDNN, because PyTorch itself already held all of it in its internal cache. You can release the cache manually with torch.cuda.empty_cache() right before the first convolution is executed. A cleaner solution is to force cuDNN initialization at the beginning by running a mock convolution:

import torch

def force_cudnn_initialization():
    s = 32
    dev = torch.device('cuda')
    # A throwaway convolution forces cuDNN to initialize and allocate its workspace up front
    torch.nn.functional.conv2d(torch.zeros(s, s, s, s, device=dev), torch.zeros(s, s, s, s, device=dev))

Calling the above function at the very beginning of the program solved the problem for me.
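For completeness, a minimal sketch of the cache-release alternative mentioned above (the Conv1d setup just mirrors the question and is only illustrative):

import torch
from torch import nn

torch.cuda.empty_cache()                         # return cached blocks to the driver so cuDNN can allocate its workspace
m = nn.Conv1d(16, 33, 3, stride=2).to('cuda')
out = m(torch.randn(20, 16, 50, device='cuda'))  # the first convolution now has memory left for cuDNN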

21

There is some discussion regarding this here. I had the same issue, but installing a CUDA 11.1 build of PyTorch resolved it for me.

This is the exact pip command:

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
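After installing, a quick sanity check (just a sketch, not part of the original answer) confirms that the wheel was built against CUDA 11.1 and that the GPU and cuDNN are visible:

import torch

print(torch.__version__)               # e.g. 1.8.0+cu111
print(torch.version.cuda)              # CUDA version the wheel was built against
print(torch.backends.cudnn.version())  # cuDNN version bundled with the wheel
print(torch.cuda.is_available())       # True if the driver can see the GPU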
7

I am also using CUDA 10.2. I had the exact same error when upgrading torch and torchvision to the latest versions (torch-1.8.0 and torchvision-0.9.0). Which versions are you using?

I guess this is not the best solution, but downgrading to torch-1.7.1 and torchvision-0.8.2 made it work just fine.
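If it helps, the downgrade should look roughly like this (the +cu102 wheel tags are my assumption, by analogy with the pip command in the answer above):

pip install torch==1.7.1+cu102 torchvision==0.8.2+cu102 -f https://download.pytorch.org/whl/torch_stable.html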

2

I had the same issue when training yolov7 on a chess dataset. Reducing the batch size from 8 to 4 solved it, presumably because the smaller batches left enough free GPU memory for cuDNN to initialize (see the sketch below).
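In generic PyTorch terms (the dataset and tensor shapes here are made up purely for illustration), the change boils down to lowering the DataLoader's batch_size:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy image-style dataset: 100 samples of 3x640x640 with integer labels
dataset = TensorDataset(torch.randn(100, 3, 640, 640), torch.zeros(100, dtype=torch.long))

# A smaller batch_size lowers peak GPU memory per step, leaving room for cuDNN's workspace
loader = DataLoader(dataset, batch_size=4, shuffle=True)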

1

In my case this error occurred when computing the loss. I was using a mixed BCE-Dice loss, and it turned out that my network output was linear instead of being passed through a sigmoid. I then applied a sigmoid to the predictions, as below, and it worked fine.

output = torch.nn.Sigmoid()(output)  # squash the raw logits into [0, 1] before the BCE-based loss
loss = criterion1(output, target)
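A numerically safer alternative for the BCE part (assuming criterion1 wraps a plain BCE loss) is to keep the raw logits and let BCEWithLogitsLoss apply the sigmoid internally; the tensors below are dummies for illustration:

import torch

criterion = torch.nn.BCEWithLogitsLoss()      # expects raw logits; applies the sigmoid internally
output = torch.randn(8, 1)                    # dummy linear (un-squashed) predictions
target = torch.randint(0, 2, (8, 1)).float()  # dummy binary targets
loss = criterion(output, target)              # no explicit Sigmoid needed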
1

In my case, I had an array indexing operation whose index was out of bounds, and CUDA did not tell me that. I was running inference on a neural network, so I moved it to the CPU instead of the GPU, and the error messages were much more informative after that. If you see this error, switch to the CPU first for debugging and you will quickly find out what is wrong (see the sketch below).
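A tiny, made-up illustration of why the CPU run is more helpful: the kind of out-of-bounds lookup the answer describes raises a readable IndexError on the CPU instead of an opaque CUDA-side failure:

import torch
from torch import nn

model = nn.Embedding(10, 4)       # stand-in model; any module that indexes a table works
bad_idx = torch.tensor([3, 12])   # 12 is out of range for a table of 10 embeddings

out = model(bad_idx)              # on the CPU this raises a clear IndexError at this line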

1

In my case I had to kill existing processes on the GPU. Use nvidia-smi to check which processes are running, then kill the one you want with killall -9 python3 (or kill its PID directly). After the memory is freed up, run your process again.

0

Sometimes, if an error happens in CUDA C++ code that is compiled into a .so file and called from Python, it can cause this problem, so check your C++ source code if you have any.
