
I'm trying to use CUDA on WSL2 Ubuntu. When I run nvidia-smi, the output looks fine:

$ nvidia-smi

Wed Aug  9 14:11:02 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.98.01              Driver Version: 536.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2060        On  | 00000000:01:00.0  On |                  N/A |
| N/A   53C    P8              12W /  80W |    872MiB /  6144MiB |      7%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A       256      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

In Python I run

>>> torch.cuda.device_count()
1

but CUDA is not available:

>>> torch.cuda.is_available()
False
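
One quick thing to check in the same interpreter is whether the installed torch wheel was built with CUDA support at all (a sketch; the exact strings depend on your install):

>>> import torch
>>> torch.__version__      # a '+cpu' suffix here means a CPU-only build
>>> torch.version.cuda     # None for a CPU-only build, e.g. '11.8' for a CUDA build
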
  • This apparently happens a lot; for me it was because the wrong/old version of CUDA was installed. Try checking that, and also see https://discuss.pytorch.org/t/torch-cuda-is-available-returns-false-even-cuda-is-installed/159016/11 – Suraj Shourie Aug 09 '23 at 18:18
  • Prior to torch 2.0.0, when you ran `pip install torch` it only installed the CPU version. If you wanted to install with GPU support you needed to know which CUDA version you had installed and run the correct command - see https://pytorch.org/get-started/previous-versions/ With torch>=2.0.0 it's a lot easier: CUDA 11.7 is the default and the CUDA libraries are installed as part of the pip install (there are now nvidia Python packages). If you want CUDA 11.8, ROCm or CPU, see https://pytorch.org/get-started/locally/ If you can, I'd upgrade to the latest - i.e. `pip install -U torch` – David Waterworth Aug 10 '23 at 03:36
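
If torch.version.cuda turns out to be None (a CPU-only wheel), a common fix along the lines of the comment above is to reinstall torch from a CUDA-specific index; the cu118 URL below is just one example from https://pytorch.org/get-started/locally/, so pick the variant that matches your setup:

$ pip uninstall torch
$ pip install torch --index-url https://download.pytorch.org/whl/cu118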

1 Answer


You should check the following compatibilities:

  1. cuDNN compatibility with your GPU
  2. cuDNN compatibility with your PyTorch version
  3. GPU driver version for your GPU

CUDA will not be available if any of these is incompatible.
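
A rough way to check items 1 and 2 from Python (a sketch; the driver version for item 3 is already visible in the nvidia-smi output above):

>>> import torch
>>> torch.version.cuda               # CUDA version the wheel was built against
>>> torch.backends.cudnn.version()   # cuDNN bundled with that wheel, e.g. 8700 (None if missing)
>>> torch.cuda.is_available()        # only turns True once all three line up
>>> torch.cuda.get_device_name(0)    # works once is_available() is True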

  • As discussed [here](https://stackoverflow.com/questions/76867532/which-version-of-pytorch-should-be-installed-for-cuda12-2-why-cant-i-find-it-o), CUDA 12.2 is not supported by current PyTorch builds yet. You can also find your actual CUDA version [here](https://stackoverflow.com/questions/9727688/how-to-get-the-cuda-version) – Sunyong Seo Aug 10 '23 at 03:25