2

I'm working on a "fujitsu" machine. It has 2 GPUs installed: a Quadro 2000 and a Tesla C2075. The Quadro GPU has 1 GB of RAM and the Tesla GPU has 5 GB (I checked using the output of nvidia-smi -q). When I run nvidia-smi, the output shows 2 GPUs, but the Tesla's display is shown as off. I'm running a memory-intensive program and would like to use the 5 GB of RAM available, but whenever I run a program, it seems to be using the Quadro GPU. Is there some way to use a particular GPU out of the 2 in a program? Does the Tesla GPU's display being "off" mean its drivers are not installed?

wp78de
  • 18,207
  • 7
  • 43
  • 71
pymd
  • 4,021
  • 6
  • 26
  • 27
  • Are you trying to trigger this programmatically from your own application? Or is this a general question about the machine? – Dan Harris Mar 03 '14 at 11:32
  • @DanHarris It's a general question. I just want to know how do I use the 5GB GPU installed on my machine? – pymd Mar 03 '14 at 11:35
  • I would say this question would be better on http://superuser.com/ as Stack Overflow is for programming questions. SuperUser is better suited to asking questions such as this. – Dan Harris Mar 03 '14 at 11:37
  • 1
    This question is fine here. The answer to this question is useful to CUDA programmers. – harrism Mar 03 '14 at 12:23
  • In general you can use `cudaSetDevice`. It will need a number, which is the GPU ID you want to run the computations on. Which number refers to which card can be determined according to the accepted answer to this post [How to get card specs programatically in CUDA](http://stackoverflow.com/questions/5689028/how-to-get-card-specs-programatically-in-cuda). Could you please clarify what do you mean by _but the Tesla ones display is shown as off_? – Vitality Mar 03 '14 at 12:30

1 Answer

9

You can control access to CUDA GPUs either using the environment or programmatically.

You can use the environment variable CUDA_VISIBLE_DEVICES to specify a list of one or more GPUs that will be visible to any application, as well as their order of visibility. For example, if nvidia-smi reports your Tesla GPU as GPU 1 (and your Quadro as GPU 0), then you can set CUDA_VISIBLE_DEVICES=1 to allow only the Tesla to be used by CUDA code.

See my blog post on the subject.

To control which GPU your application uses programmatically, you should use the device management API of CUDA. Query the number of devices using cudaGetDeviceCount, query each device's properties using cudaGetDeviceProperties, select the device that fits your application's criteria, and activate it with cudaSetDevice. You can also use cudaChooseDevice to select the device that most closely matches a set of device properties you specify.
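As a minimal sketch of that flow, the following enumerates the devices and picks the one with the most global memory (which on this machine would be the Tesla C2075); the selection criterion is just an example, not the only option:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    int chosen = 0;
    size_t bestMem = 0;
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %zu MB global memory\n",
               i, prop.name, prop.totalGlobalMem / (1024 * 1024));
        // Example criterion: prefer the device with the most memory.
        if (prop.totalGlobalMem > bestMem) {
            bestMem = prop.totalGlobalMem;
            chosen = i;
        }
    }

    cudaSetDevice(chosen);  // subsequent CUDA calls in this thread use it
    printf("Selected device %d\n", chosen);
    return 0;
}
```

Compile with nvcc; cudaSetDevice affects the calling host thread, so it should be called before any kernel launches or memory allocations you want to land on that device.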

harrism
  • 26,505
  • 2
  • 57
  • 88