
For debugging CUDA code and checking compatibility, I need to find out which NVIDIA driver version is installed for my GPU. I found How to get the cuda version? but that does not help me here.

Framester

11 Answers


Using nvidia-smi should tell you that:

bwood@mybox:~$ nvidia-smi 
Mon Oct 29 12:30:02 2012       
+------------------------------------------------------+                       
| NVIDIA-SMI 3.295.41   Driver Version: 295.41         |                       
|-------------------------------+----------------------+----------------------+
| Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
|===============================+======================+======================|
| 0.  GeForce GTX 580           | 0000:25:00.0  N/A    |       N/A        N/A |
|  54%   70 C  N/A   N/A /  N/A |  25%  383MB / 1535MB |  N/A      Default    |
|-------------------------------+----------------------+----------------------|
| Compute processes:                                               GPU Memory |
|  GPU  PID     Process name                                       Usage      |
|=============================================================================|
|  0.           Not Supported                                                 |
+-----------------------------------------------------------------------------+
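
If you only need the driver number from that output, a quick sketch based on the header line above (the value follows "Driver Version:"):

bwood@mybox:~$ nvidia-smi | grep "Driver Version"
| NVIDIA-SMI 3.295.41   Driver Version: 295.41         |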
Brendan Wood
  • On my CentOS 6.4 system, it gives me the error "-bash: nvidia-smi: command not found". What might be the problem? – Shyamkkhadka Mar 05 '17 at 10:03
  • @Shyamkkhadka Likely something wrong with your PATH. You could try to find `nvidia-smi` like so: `locate nvidia-smi` – Brendan Wood Mar 10 '17 at 15:46
  • @BrendanWood, the locate command shows blank output. I suspect it may have no GPU hardware at all, because it is an HPC system and I am accessing it remotely over SSH. – Shyamkkhadka Mar 10 '17 at 18:21
  • @Shyamkkhadka Yes, that's probably it. HPC systems generally don't have GPUs unless they are meant to be a GPU cluster. You can check the available hardware with `lspci`. For example: http://stackoverflow.com/questions/10310250/how-to-check-for-gpu-on-centos-linux – Brendan Wood Mar 10 '17 at 19:25
  • @BrendanWood, as suggested in your link, I ran "lspci | grep VGA" and it shows "01:03.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)". So I think it has GPU hardware. – Shyamkkhadka Mar 11 '17 at 21:22

On any Linux system with the NVIDIA driver installed and loaded into the kernel, you can execute:

cat /proc/driver/nvidia/version

to get the version of the currently loaded NVIDIA kernel module, for example:

$ cat /proc/driver/nvidia/version 
NVRM version: NVIDIA UNIX x86_64 Kernel Module  304.54  Sat Sep 29 00:05:49 PDT 2012
GCC version:  gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) 
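
If you only want the version number itself from that file, a minimal sketch assuming the NVRM line layout shown above (the field position can shift if the wording of that line changes):

$ awk '/NVRM version/ { print $8 }' /proc/driver/nvidia/version
304.54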
talonmies
    Or if you have Bumblebee installed (due to NVIDIA Optimus dual GPU), then run this instead: "optirun cat /proc/driver/nvidia/version" – Shervin Emami Sep 07 '13 at 11:47
  • This is especially useful when the output of `nvidia-smi` is: `Failed to initialize NVML: GPU access blocked by the operating system` – DarioP Apr 24 '15 at 10:01
  • On my CentOS 6.4 system, I don't have an nvidia directory inside /proc/driver. What might be the problem? Because of this, I am not able to see my NVIDIA driver version. – Shyamkkhadka Mar 05 '17 at 10:04
  • Also useful when you get the output `Failed to initialize NVML: Driver/library version mismatch` from `nvidia-smi`. – Sethos II Jul 10 '20 at 09:00

modinfo does the trick.

root@nyx:/usr/src# modinfo nvidia|grep version:
version:        331.113
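
If the driver module is installed under a versioned name such as nvidia_XXX (see the comments below), a sketch that points modinfo at whatever nvidia module file exists for the running kernel (module file names and paths vary by distribution):

root@nyx:/usr/src# modinfo $(find /lib/modules/$(uname -r) -name 'nvidia*.ko*' | head -1) | grep ^version:
version:        331.113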
Michael
    On my system the module was named `nvidia_XXX` corresponding to the major driver series I had installed, and since `modinfo` doesn't support wildcards or partial name matches I had to do this `modinfo $(find /lib/modules/$(uname -r) -iname nvidia_*.ko | head -1) | grep ^version:` which returns the correct major and minor driver version. – dragon788 Jul 12 '17 at 23:20
  • On Ubuntu 18.04 my version of `modinfo` has a `--field` command line option, so you can skip the grep: `modinfo nvidia --field version`. On Ubuntu 16.04 this doesn't seem to work; I always get `ERROR: Module nvidia not found`. – cheshirekow Jun 20 '19 at 20:38
  • modinfo shows a different version from the /proc/driver/nvidia/version file. I suppose it reads the version from the module file, not from the one actually in use. I just installed the new driver and I still have to reboot. – lorenzo Mar 10 '21 at 15:41
  • For Ubuntu/Debian you can do `sudo modinfo nvidia-current --field version` – sgtcoder Nov 26 '21 at 17:27

Windows version:

cd \Program Files\NVIDIA Corporation\NVSMI

nvidia-smi
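
If you'd rather not change directories, a sketch calling it by its full path (assuming the default install location above; on recent drivers nvidia-smi.exe may also live in C:\Windows\System32 and already be on PATH):

"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query-gpu=driver_version --format=csv,noheader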

ccc
nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0

returns the result as a plain string that needs no further parsing, for example: 470.82.00
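
If you want that same value from a script, a minimal Python sketch wrapping the command above (it assumes nvidia-smi is on the PATH):

import subprocess

# Run the same driver_version query as above and capture its single-line output
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader", "--id=0"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. 470.82.00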

In case nvidia-smi is not available for some reason, the information can be obtained by calling into the driver APIs. The driver libraries can be loaded using Python's ctypes library.

For CUDA see: https://gist.github.com/f0k/63a664160d016a491b2cbea15913d549

For driver information see: https://github.com/mars-project/mars/blob/a50689cda4376d82a40b7aa9833f572299db7efd/mars/lib/nvutils.py
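
For the driver version specifically, a minimal ctypes sketch along those lines (an illustration only, not the code behind the links above; it assumes libnvidia-ml.so.1, which ships with the driver, can be found by the dynamic loader):

import ctypes

# Load NVML (the library behind nvidia-smi) and initialise it
nvml = ctypes.CDLL("libnvidia-ml.so.1")
if nvml.nvmlInit_v2() != 0:
    raise RuntimeError("nvmlInit_v2 failed")

try:
    # nvmlSystemGetDriverVersion fills a caller-supplied character buffer
    buf = ctypes.create_string_buffer(80)
    if nvml.nvmlSystemGetDriverVersion(buf, ctypes.c_uint(len(buf))) != 0:
        raise RuntimeError("nvmlSystemGetDriverVersion failed")
    print(buf.value.decode())  # e.g. 470.82.00
finally:
    nvml.nvmlShutdown()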

Aleksey Vlasenko

[NOTE: I am deliberately not deleting my answer, so people can see how not to do it]

If you use:

me@over_there:~$  dpkg --status nvidia-current | grep Version | cut -f 1 -d '-' | sed 's/[^.,0-9]//g'
260.19.06

you will get the version of the NVIDIA driver package installed through your distribution's packaging mechanism. But this may not be the version that is actually running as part of your kernel right now.

Framester
  • That doesn't tell you what version of the driver is actually installed and in use by the kernel. Use the proc file system to see that. Also, this only works in Debian-style distributions. – talonmies Oct 29 '12 at 16:31
  • 2
    @Framester thanks for leaving this up - that is the first thing that I'd have done (and its wrong!) – S. Dixon Nov 09 '14 at 20:13
  • @Framester: You totally gamed the system! I just gave you another +1 on a useful wrong answer... you cunning devil :-) – einpoklum Apr 09 '17 at 20:35

To expand on ccc's answer, if you want to incorporate querying the card into a script, here is information on NVIDIA's site on how to do so:

https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries

Also, I found this thread while researching PowerShell. Here is an example command that runs the utility to query the GPU name, memory utilization, and driver version to get you started.

# Get GPU metrics by running nvidia-smi and parsing its CSV output
$cmd = "& 'C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi' --query-gpu=name,utilization.memory,driver_version --format=csv"
$gpuinfo = Invoke-Expression $cmd | ConvertFrom-Csv
$gpuname = $gpuinfo.name                                    # GPU model name
$gpuutil = $gpuinfo.'utilization.memory [%]'.Split(' ')[0]  # memory utilization, number only
$gpuDriver = $gpuinfo.driver_version                        # driver version string
Jeff Blumenthal

If you need to get this in a Python program on a Linux system, e.g. for reproducibility:

with open('/proc/driver/nvidia/version') as f:
    version = f.read().strip()
print(version)

gives:

NVRM version: NVIDIA UNIX x86_64 Kernel Module  384.90  Tue Sep 19 19:17:35 PDT 2017
GCC version:  gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5) 
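
If you only want the version number itself, a small follow-up sketch (assuming the NVRM line format shown above):

import re

with open('/proc/driver/nvidia/version') as f:
    match = re.search(r'Kernel Module\s+([0-9.]+)', f.read())
print(match.group(1) if match else 'unknown')  # e.g. 384.90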
Martin Thoma

nvidia-container-cli info is another option. Below is example output from my environment.

⋊> ~ nvidia-container-cli info                                               18:32:30
NVRM version:   465.19.01
CUDA version:   11.3

Device Index:   0
Device Minor:   0
Model:          NVIDIA TITAN X (Pascal)
Brand:          GeForce
GPU UUID:       GPU-fcae2b3c-b6c0-c0c6-1eef-4f25809d16f9
Bus Location:   00000000:01:00.0
Architecture:   6.1
⋊> ~                                                                         18:32:30
Keiku

Try this if all the GPUs are using the same driver (tail -n 1 keeps only the last GPU's line and drops the CSV header):

nvidia-smi --query-gpu=driver_version --format=csv | tail -n 1
ys_huang

Yet another alternative; this one is useful if nvidia-smi is unavailable (e.g. if you installed the drivers via akmod-nvidia from RPM Fusion).

nvidia-settings -q NvidiaDriverVersion

returns:

Attribute 'NvidiaDriverVersion' (fedora:0[gpu:0]): 530.41.03

Or, to return just the value, add -t for terse output:

nvidia-settings -q NvidiaDriverVersion -t

returns:

530.41.03
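
The terse form is handy in scripts, for example (a sketch; nvidia-settings needs a running X session to query):

driver_version=$(nvidia-settings -q NvidiaDriverVersion -t)
echo "$driver_version"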

cddt