
Suppose I have a system with a single GPU installed, and suppose I've also installed a recent version of CUDA.

I want to determine the compute capability of my GPU. If I could compile code, that would be easy:

#include <stdio.h>
#include <cuda_runtime_api.h>

int main() {
    // query device 0 and print, e.g., "86" for compute capability 8.6
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%d\n", prop.major * 10 + prop.minor);
}
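
That is, saving the above as, say, get_cc.cu, it would just be:

nvcc -o get_cc get_cc.cu && ./get_cc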

but suppose I want to do that without compiling. Can I? I thought nvidia-smi might help me, since it lets you query all sorts of information about devices, but it seems it doesn't let you obtain the compute capability. Maybe there's something else I can do? Maybe something visible via /proc or system logs?

Edit: This is intended to run before a build, on a system I don't control, so it must have minimal dependencies, run from the command line, and not require root privileges.

einpoklum
  • So you just want to execute a shell script? What do you do with that information once you have it? Can't you copy your executable onto that system? – m.s. Nov 19 '16 at 16:58
  • A best practice when installing CUDA is to compile the sample codes - it's fairly trivial to do. If they are compiled (i.e. "prebuilt"), then you can run `deviceQuery`. – Robert Crovella Nov 19 '16 at 16:58
  • @RobertCrovella: I can't assume people will have the samples 1. installed and 2. compiled ... – einpoklum Nov 19 '16 at 17:00
  • Then drop your own executable on the system, as suggested by @m.s. The `deviceQueryDrv` executable does not even require that CUDA be installed (although it does require that a proper GPU driver be installed). If you're going to run this before a build, apparently you have a method to get the files to be built on the system in question - include your own utility. – Robert Crovella Nov 19 '16 at 17:01
  • @RobertCrovella: I can't start distributing binaries... the best I can do now _is_ using compilation, see [here](http://stackoverflow.com/a/40665580/1593077). – einpoklum Nov 19 '16 at 17:05

3 Answers


We can use `nvidia-smi --query-gpu=compute_cap --format=csv` to get the compute capability.

Sample output:

compute_cap
8.6

This query is available starting with CUDA Toolkit 11.6.
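
For use inside a build script, the header row can be suppressed with the csv,noheader format; a minimal sketch (the variable name is just illustrative):

cc=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n 1)
echo "compute capability of the first GPU: ${cc}"   # e.g. 8.6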

idy002

Unfortunately, it looks like the answer at the moment is "No", and that one needs to either compile a program or use a binary compiled elsewhere.

Edit: I have adapted a workaround for this issue - a self-contained bash script which compiles a small embedded CUDA program to determine the compute capability. (It is particularly useful to call from within CMake, but it can also be run on its own.)

Also, I've filed a feature-request bug report about this with nVIDIA.

Here's the script, in a version assuming that nvcc is on your path. The first line is a comment as far as nvcc is concerned, but when the file is run as a shell script, it invokes nvcc on the file itself (via --run) and exits, so the shell never reaches the CUDA code below:

//usr/bin/env nvcc --run "$0" ${1:+--run-args "${@:1}"} ; exit $?
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime_api.h>

int main(int argc, char *argv[])
{
    cudaDeviceProp prop;
    cudaError_t status;
    int device_count;
    int device_index = 0;
    if (argc > 1) {
        device_index = atoi(argv[1]);
    }

    status = cudaGetDeviceCount(&device_count);
    if (status != cudaSuccess) {
        fprintf(stderr,"cudaGetDeviceCount() failed: %s\n", cudaGetErrorString(status));
        return -1;
    }
    if (device_index >= device_count) {
        fprintf(stderr, "Specified device index %d exceeds the maximum (the device count on this system is %d)\n", device_index, device_count);
        return -1;
    }
    status = cudaGetDeviceProperties(&prop, device_index);
    if (status != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties() for device %d failed: %s\n", device_index, cudaGetErrorString(status));
        return -1;
    }
    int v = prop.major * 10 + prop.minor;
    printf("%d\n", v);
}
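
A usage sketch, assuming you've saved the script as compute-capability.sh (the name is arbitrary):

bash compute-capability.sh       # prints e.g. 86 for device 0
bash compute-capability.sh 1     # optionally pass a device index
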
einpoklum
  • I would suggest filing an RFE with NVIDIA to have reporting of compute capability added to `nvidia-smi`. `--query-gpu` can report numerous device properties, but not the compute capability, which seems like an oversight. They should support `--query-gpu=compute_capability`, which would make your scripting task trivial. – njuffa Nov 20 '16 at 09:02
  • That seems like the way to go, thanks. Two minor notes: (1) in the NVIDIA bug reporting system, all reports are confidential and only visible to the filer and engineers addressing the issue, so providing a link does not help; you might want to mention the bug number instead in case people want to compare notes. (2) The correct spelling of the company's name is NVIDIA, i.e. all-caps. – njuffa Dec 04 '16 at 20:09
  • @njuffa: Are you sure that's the correct spelling? Is there someplace official that says that? – einpoklum Apr 15 '17 at 20:40
  • "NVIDIA should always appear in uppercase" [NVIDIA's Trademark and Copyright Guidelines, section "Use of NVIDIA Word mark"; latest version I could find on the double](http://international.download.nvidia.com/partnerforce-us/Brand-Guidelines/NVIDIA_Trademark_Guidelines_2013.pdf) – njuffa Apr 15 '17 at 22:27
  • @njuffa: But that can't be right... I mean, both the [original](https://upload.wikimedia.org/wikipedia/en/6/69/Nvidia_old_logo.svg) and the [current](https://upload.wikimedia.org/wikipedia/sco/2/21/Nvidia_logo.svg) logos use a lowercase n! – einpoklum Apr 15 '17 at 22:36
  • When you are typing you are not using the logo but the word mark. This is off-topic. – njuffa Apr 15 '17 at 22:55
  • Does anyone know if this (--query-gpu=compute_capability) was ever implemented? – ThatsRightJack Sep 21 '19 at 06:56
  • It seems absent from v450.80.02 on Ubuntu 20.04. – user2023370 Nov 03 '20 at 14:28

You can use the `deviceQuery` utility included in the CUDA installation:

# change into the utility's source directory
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery

# build deviceQuery utility with make as root
$ sudo make

# run deviceQuery
$ ./deviceQuery  | grep Capability
  CUDA Capability Major/Minor version number:    7.5

# optionally, copy deviceQuery to ~/bin for future use
$ cp ./deviceQuery ~/bin
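
If you cannot use sudo (the question rules out root privileges), a possible workaround is to build in a user-writable copy of the samples tree; the deviceQuery Makefile pulls headers from the samples' common/inc directory via a relative path, so the whole tree is copied in this sketch (default install prefix assumed):

# copy the samples tree somewhere writable and build there, no root needed
$ cp -r /usr/local/cuda/samples ~/cuda-samples
$ cd ~/cuda-samples/1_Utilities/deviceQuery
$ make

# run it as before
$ ./deviceQuery | grep Capability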

Full output from deviceQuery with an RTX 2080 Ti follows:

 $ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce RTX 2080 Ti"
  CUDA Driver Version / Runtime Version          11.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 11016 MBytes (11551440896 bytes)
  (68) Multiprocessors, ( 64) CUDA Cores/MP:     4352 CUDA Cores
  GPU Max Clock rate:                            1770 MHz (1.77 GHz)
  Memory Clock rate:                             7000 Mhz
  Memory Bus Width:                              352-bit
  L2 Cache Size:                                 5767168 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS


Hongsoog
  • For our CUDA 11.7 installation I found deviceQuery in the `extras/demo_suite` directory: `/extras/demo_suite/deviceQuery` – Niklas Feb 15 '23 at 15:39