
I want to extract features from images. I first define a tensor `data_feature_map`, and then use `torch.cat` to append each image's feature vector to it.

My code is:

data_feature_map = torch.ones(1,2048)
for i, data in enumerate(train_loader, 0):
    img, _ = data
    img.requires_grad_=False
    if torch.cuda.is_available():
        img = img.cuda()
    out = model(img)
    # out.shape = [1,2048]
    out = out.view(1,-1).cpu()
    data_feature_map = torch.cat((data_feature_map, out), 0)

But when I run it, it fails with the error "RuntimeError: CUDA out of memory."

Please tell me why this error occurs. Thank you very much.

Oliver.W
  • This error occurs most likely because your GPU memory cannot hold the elements that you are trying to store on it. Also, at which point is the error occurring? Because seemingly you are bringing the elements back from the GPU, so I'd assume the error is happening inside the `model(img)` bit? Please include the full error message and read up on duplicate issues. – dennlinger Jun 05 '20 at 07:25
  • This question seems similar to https://stackoverflow.com/questions/54374935/how-to-fix-this-strange-error-runtimeerror-cuda-error-out-of-memory/54376403. In addition, I recommend that you have a look at the official PyTorch documentation: https://pytorch.org/docs/stable/notes/faq.html – Ilyes KAANICH Jun 05 '20 at 07:25

1 Answer


Since your GPU is running out of memory, you can try a few things:

1.) Reduce your batch size

2.) Reduce your network size
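For the first suggestion, a minimal sketch of lowering the batch size via the `DataLoader`; the dataset here is a stand-in, not your actual data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 8 fake "images" with 3 features each.
dataset = TensorDataset(torch.randn(8, 3), torch.zeros(8))

# A smaller batch_size means fewer samples resident on the GPU at once.
train_loader = DataLoader(dataset, batch_size=2, shuffle=False)

for img, _ in train_loader:
    print(img.shape)  # each batch holds only 2 samples: torch.Size([2, 3])
```

Note that lowering the batch size only reduces the peak memory of a single forward pass; if memory usage grows with each loop iteration, also check that you are not keeping autograd history around (see the PyTorch FAQ linked in the comments).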

Aniket Thomas