I'm having a problem running a program with CUDA; the error says:

"raise RuntimeError('Attempting to deserialize object on a CUDA ' RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."

I have a GPU (GeForce RTX 3050 Ti Mobile). I tried to follow all the guides and reinstalled the CUDA toolkit:

nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

I tried the common fix for this error that I found in other questions on Stack Overflow, namely adding `map_location=torch.device('cpu')`. That works, but I would like to understand why CUDA does not find my GPU and whether there is a way to actually fix this.
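
For reference, this is roughly how I applied that workaround (a minimal sketch; the checkpoint filename is just a placeholder for my actual file):

import torch

# Load the checkpoint onto the CPU explicitly so that deserialization
# does not require a CUDA device.
checkpoint = torch.load("model.pth", map_location=torch.device("cpu"))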

This is the part of the code where I select the GPU for CUDA:

# Set device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#device = torch.device("cpu")

Thank you for the help.

You may find this answer helpful: [Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda?](https://stackoverflow.com/a/61034368/2790047) – jodag Apr 26 '23 at 15:13
