Is there a simple way to check if an NVIDIA GPU is available on my system using only standard libraries? I've already seen other answers where they recommend using PyTorch or TensorFlow but that's not what I'm looking for. I'd like to know how to do this on both Windows and Linux. Thanks!
- Do you have a Windows or Linux system? – Alex Metsai May 12 '21 at 12:54
- I have both and I'd like to know how to do that on both of them. I'll update my question, thanks for reminding me! – sgrontflix May 12 '21 at 12:56
- I don't think you are going to find a direct standard-library solution, but if you used `platform` and `ctypes` you could roll your own. It seems like a lot of work that others have potentially already done and maintain, such as `wmi` on Windows. – JonSG May 12 '21 at 13:14
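A rough sketch of the `ctypes` approach mentioned in that comment might look like the following. It assumes the NVIDIA driver ships its NVML library as `nvml.dll` on Windows and `libnvidia-ml.so.1` on Linux, and that a zero return code means success; treat it as a starting point rather than a tested solution.

import ctypes
import platform

def has_nvidia_gpu():
    # NVML is installed alongside the NVIDIA driver:
    # nvml.dll on Windows, libnvidia-ml.so.1 on Linux (assumed names)
    lib_name = 'nvml.dll' if platform.system() == 'Windows' else 'libnvidia-ml.so.1'
    try:
        nvml = ctypes.CDLL(lib_name)
    except OSError:
        return False  # library not found, so no NVIDIA driver is installed
    if nvml.nvmlInit_v2() != 0:  # 0 means success in NVML
        return False
    count = ctypes.c_uint()
    ok = nvml.nvmlDeviceGetCount_v2(ctypes.byref(count)) == 0
    nvml.nvmlShutdown()
    return ok and count.value > 0

print(has_nvidia_gpu())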
2 Answers
When you have Nvidia drivers installed, the command `nvidia-smi`
outputs a neat table giving you information about your GPU, CUDA version, and driver setup.
By checking whether this command runs successfully, you can tell whether an Nvidia GPU is present.
Do note that this will only work if both an Nvidia GPU and the appropriate drivers are installed.
The code below should work on both Linux and Windows, and the only library it uses is `subprocess`, which is part of the standard library.
import subprocess

try:
    subprocess.check_output('nvidia-smi')
    print('Nvidia GPU detected!')
except Exception:
    # this command not being found can raise quite a few
    # different errors depending on the configuration
    print('No Nvidia GPU in system!')
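If you only want to know whether `nvidia-smi` exists on the PATH without actually running it, `shutil.which` (also part of the standard library) is a possible variant. Note that it only confirms the driver utility is installed, not that a GPU is currently usable:

import shutil

# shutil.which returns the full path to the executable, or None if it is not on the PATH
if shutil.which('nvidia-smi') is not None:
    print('nvidia-smi found, an Nvidia driver appears to be installed')
else:
    print('No nvidia-smi found on PATH')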

Briar Campbell
- Prefer `except Exception` over a bare `except`! https://stackoverflow.com/questions/54948548/what-is-wrong-with-using-a-bare-except – ti7 May 12 '21 at 13:30
- I completely forgot that was a thing! I'm so used to finding every possible exception from a block of code on every platform... Thanks! – Briar Campbell May 12 '21 at 13:38
The following code shows whether CUDA is available; CUDA is what communicates with the GPU, so if it is available, an Nvidia GPU can be used.
import torch

print(torch.cuda.is_available())     # True if PyTorch can see a CUDA-capable GPU
print(torch.backends.cudnn.enabled)  # True if the cuDNN backend is enabled

Rostam
- `pip install torch==1.12+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html` – 蔡宗容 Sep 19 '22 at 06:33
- But I don't know the difference between `torch.cuda.is_available()` and `torch.backends.cudnn.enabled`; could someone explain it? – 蔡宗容 Sep 19 '22 at 06:36