I have installed PyTorch, and I would like to know whether there is any script to test that the installation is correct, e.g., whether it can enable CUDA, etc.?
Possible duplicate of [How to check if pytorch is using the GPU?](https://stackoverflow.com/questions/48152674/how-to-check-if-pytorch-is-using-the-gpu) – prosti Sep 18 '19 at 11:30
4 Answers
For your first question: in your Python script, just add
import torch
If this raises "ModuleNotFoundError: No module named 'torch'", your PyTorch installation is not complete.
For your second question, to check whether PyTorch can use CUDA, call
torch.cuda.is_available()
This returns True if PyTorch can use CUDA.
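Putting both checks into one small script (a minimal sketch; the device_count/get_device_name calls are additional standard torch.cuda queries, not part of the answer above):
import torch

print(torch.__version__)                  # fails with ModuleNotFoundError if the install is broken
print(torch.cuda.is_available())          # True only if PyTorch can see a usable CUDA device
if torch.cuda.is_available():
    print(torch.cuda.device_count())      # number of visible GPUs
    print(torch.cuda.get_device_name(0))  # name of the first GPU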

This alone will not verify that pytorch will work. See [this issue](https://github.com/pytorch/pytorch/issues/31285) – Pro Q May 09 '22 at 07:58
Given the comments on [the issue](https://github.com/pytorch/pytorch/issues/31285), and my own personal case, it seems that for any GPU that has `sm_86` capabilities, this script will run successfully, but will end up failing when you try to run real code/do ML training. [Bai's answer](https://stackoverflow.com/a/71988947/5049813), which includes more tests of the true functionality, will fail if attempted on these sorts of GPUs. – Pro Q May 09 '22 at 10:29
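For reference, a functional check in the spirit of the linked answer (a rough sketch, not that answer's exact code) actually launches a kernel on the GPU instead of only querying availability:
import torch

x = torch.rand(1024, 1024, device="cuda")
y = torch.rand(1024, 1024, device="cuda")
z = x @ y                  # runs a real CUDA matmul kernel
torch.cuda.synchronize()   # surfaces kernel/architecture errors (e.g. the sm_86 case above)
print(z.sum().item())      # copies the result back to the host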
You can use the collect_env.py script provided in the PyTorch utils folder. Its output is as follows:
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.14.6
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 410.48
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchsample==0.1.3
[pip] torchsummary==1.5.1
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchsample 0.1.3 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.4.0 py37_cu100 pytorch
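The script is usually run as python -m torch.utils.collect_env; if you prefer to call it from Python directly, something like this should work (assuming your build ships torch.utils.collect_env, as recent releases do):
from torch.utils.collect_env import main

main()  # prints the same environment report as above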

If you installed it from here, you are doing fine.
Check this:
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(dev)
If your GPU is installed correctly, you should have nvidia-smi available.
(On Windows it should be inside C:\Program Files\NVIDIA Corporation\NVSMI.)
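Building on the snippet above, you can also confirm that tensors actually land on dev and that a simple op runs there (a small sketch, not part of the original answer):
import torch

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
t = torch.ones(3, 3, device=dev)
print(t.device)       # e.g. "cuda:0" or "cpu"
print((t + t).sum())  # the addition runs on the selected device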

You can use my code. The repo implements a set of PyTorch environment checkers and CUDA-based operators, which help you verify whether your GPU-enabled PyTorch installation works properly.
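Since the repository is not linked here, a hypothetical check in the same spirit compares a CUDA result against the CPU reference:
import torch

a = torch.rand(256, 256)
b = torch.rand(256, 256)
cpu_out = a @ b
gpu_out = (a.cuda() @ b.cuda()).cpu()               # requires a working CUDA setup
print(torch.allclose(cpu_out, gpu_out, atol=1e-5))  # True if the CUDA kernels produce correct results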
