
In a multi-GPU computer, how do I designate which GPU a CUDA job should run on?

As an example, when installing CUDA I opted to install the NVIDIA_CUDA-<#.#>_Samples, then ran several instances of the nbody simulation; they all ran on GPU 0, while GPU 1 was completely idle (monitored using watch -n 1 nvidia-smi). Checking CUDA_VISIBLE_DEVICES using

echo $CUDA_VISIBLE_DEVICES

I found this was not set. I tried setting it using

CUDA_VISIBLE_DEVICES=1

then running nbody again but it also went to GPU 0.

I looked at the related question, how to choose designated GPU to run CUDA program?, but the deviceQuery command is not in the CUDA 8.0 bin directory. In addition to CUDA_VISIBLE_DEVICES, I saw other posts refer to the environment variable CUDA_DEVICES, but these were not set and I did not find information on how to use them.

While not directly related to my question, using nbody -device=1 I was able to get the application to run on GPU 1 but using nbody -numdevices=2 did not run on both GPU 0 and 1.

I am testing this on a system running CentOS 6.8 with the bash shell, CUDA 8.0, 2 GTX 1080 GPUs, and NVIDIA driver 367.44.

I know that when writing CUDA code you can manage and control which CUDA resources to use, but how would I manage this from the command line when running a compiled CUDA executable?

  • The `nbody` application has a command line option to select the GPU to run on - you might want to study that code. For the more general case, `CUDA_VISIBLE_DEVICES` should work. If it does not, you're probably not using it correctly, and you should probably give a complete example of what you have tried. You should also indicate what OS you are working on and for linux, what shell (e.g. bash, csh, etc.). `deviceQuery` isn't necessary to any of this, it's just an example app to demonstrate the behavior of `CUDA_VISIBLE_DEVICES`. The proper environment variable name doesn't have a `$` in it. – Robert Crovella Sep 22 '16 at 21:30
  • You'll need to learn more about the bash shell you are using. This: `CUDA_VISIBLE_DEVICES=1` doesn't permanently set the environment variable (in fact, if that's all you put on that command line, it really does nothing useful). This: `export CUDA_VISIBLE_DEVICES=1` will set it for the remainder of that session. You may want to study how environment variables work in bash, how various commands affect them, and for how long. – Robert Crovella Sep 23 '16 at 03:32
  • `deviceQuery` is provided with CUDA 8, but you have to build it. If you read the CUDA 8 installation guide for linux, it will explain how to build `deviceQuery`. – Robert Crovella Sep 23 '16 at 03:35
  • In /usr/local/cuda/bin there is a cuda-install-samples-.sh script, which you can use if the samples were not installed. Then, in the 1_Utilities folder in the NVIDIA_Samples installation directory, you will find deviceQuery. Just calling make in that folder will compile it for you. If I remember correctly, it will copy the binary into the same folder. – Mircea Aug 01 '18 at 09:24
  • Should it be `watch -n 1 nvidia-smi`... – oliversm Aug 31 '18 at 14:55
  • for random gpu you can do this: `export CUDA_VISIBLE_DEVICES=$((( RANDOM % 8 )))` – Charlie Parker Mar 11 '21 at 20:49

6 Answers


The problem was caused by not setting the CUDA_VISIBLE_DEVICES variable within the shell correctly.

To specify CUDA device 1 for example, you would set the CUDA_VISIBLE_DEVICES using

export CUDA_VISIBLE_DEVICES=1

or

CUDA_VISIBLE_DEVICES=1 ./cuda_executable

The former sets the variable for the life of the current shell, the latter only for the lifespan of that particular executable invocation.
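The difference in scope can be checked without touching a GPU, using printenv to show what each process actually sees:

```shell
unset CUDA_VISIBLE_DEVICES                             # start from a clean slate

# Per-invocation form: the variable exists only in the child process's environment
CUDA_VISIBLE_DEVICES=1 printenv CUDA_VISIBLE_DEVICES   # prints 1
printenv CUDA_VISIBLE_DEVICES || echo "unset"          # still unset in the shell itself

# export form: persists for every later command in this session
export CUDA_VISIBLE_DEVICES=1
printenv CUDA_VISIBLE_DEVICES                          # prints 1
```

This is the pitfall from the question: a bare `CUDA_VISIBLE_DEVICES=1` on its own line sets a shell variable but does not export it, so a subsequently launched nbody never sees it.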

If you want to specify more than one device, use

export CUDA_VISIBLE_DEVICES=0,1

or

CUDA_VISIBLE_DEVICES=0,1 ./cuda_executable

In case someone else is doing this in Python and it is not working: try setting the variables before importing pycuda and tensorflow.

I.e.:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
...
import pycuda.autoinit
import tensorflow as tf
...


  • This works great! I used it in the terminal instead of python: `export CUDA_DEVICE_ORDER=PCI_BUS_ID` and then `export CUDA_VISIBLE_DEVICES=` – Mann Dec 23 '20 at 14:17

You can also set the GPU in the command line so that you don't need to hard-code the device into your script (which may fail on systems without multiple GPUs). Say you want to run your script on GPU number 5, you can type the following on the command line and it will run your script just this once on GPU#5:

CUDA_VISIBLE_DEVICES=5 python test_script.py

Set the following two environment variables:

NVIDIA_VISIBLE_DEVICES=$gpu_id
CUDA_VISIBLE_DEVICES=0

where gpu_id is the 0-based ID of your selected GPU, as seen in the host system's nvidia-smi, that will be made available to the guest system (e.g. to the Docker container environment).

You can verify that a different card is selected for each value of gpu_id by inspecting the Bus-Id parameter in nvidia-smi (run in a terminal in the guest system).

More info

This method based on NVIDIA_VISIBLE_DEVICES exposes only a single card to the system (with local ID zero), hence we also hard-code the other variable, CUDA_VISIBLE_DEVICES, to 0 (mainly to prevent it from defaulting to an empty string, which would indicate no GPU).

Note that the environment variable should be set before the guest system is started (so there is no chance of doing it from your Jupyter Notebook's terminal), for instance using docker run -e NVIDIA_VISIBLE_DEVICES=0, or env in Kubernetes or Openshift.

If you want GPU load-balancing, make gpu_id random at each guest system start.

If setting this with python, make sure you are using strings for all environment variables, including numerical ones.


The accepted solution based on CUDA_VISIBLE_DEVICES alone does not hide other cards (different from the pinned one), and thus causes access errors if you try to use them in your GPU-enabled python packages. With this solution, other cards are not visible to the guest system, but other users still can access them and share their computing power on an equal basis, just like with CPU's (verified).

This is also preferable to solutions using Kubernetes / Openshift controllers (resources.limits.nvidia.com/gpu), which would impose a lock on the allocated card, removing it from the pool of available resources (so the number of containers with GPU access could not exceed the number of physical cards).

This has been tested under CUDA 8.0, 9.0, 10.1, and 11.2 in docker containers running Ubuntu 18.04 or 20.04 and orchestrated by Openshift 3.11.


Update

Below in the comments there is a modified solution by lukaszzenko that uses the same idea and results in the same output. Consider using that instead, as it is more concise:

export CUDA_VISIBLE_DEVICES=$(nvidia-smi --query-gpu=memory.free,index --format=csv,nounits,noheader | sort -nr | head -1 | awk '{ print $NF }')
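This pipeline keeps the row with the largest memory.free value and prints its last field, the GPU index. Its logic can be sanity-checked without a GPU by feeding it mock query output (the memory values below are invented):

```shell
# Mock output of: nvidia-smi --query-gpu=memory.free,index --format=csv,nounits,noheader
printf '2048, 0\n8192, 1\n4096, 2\n' |
  sort -nr |            # sort numerically, descending, by free memory
  head -1 |             # keep the row with the most free memory
  awk '{ print $NF }'   # print its last field: the GPU index
# prints: 1
```

Note that it selects by free memory rather than by GPU utilization, which is usually a reasonable proxy but not identical to the original solution below.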

Choose GPU with lowest utilization (original solution)

After making xml2json available on your PATH, you can select the N GPU(s) with the lowest utilization:

export CUDA_VISIBLE_DEVICES=$(nvidia-smi -x -q | xml2json | jq '.' | python -c 'import json;import sys;print(",".join([str(gpu[0]) for gpu in sorted([(int(gpu["minor_number"]), float(gpu["utilization"]["gpu_util"].split(" ")[0])) for gpu in json.load(sys.stdin)["nvidia_smi_log"]["gpu"]], key=lambda x: x[1])[:2]]))')

Just replace the [:2] by [:1] if you need a single GPU, or by any number up to your maximum number of available GPUs.

  • Easier way of doing it would be `export CUDA_VISIBLE_DEVICES=$(nvidia-smi --query-gpu=memory.free,index --format=csv,nounits,noheader | sort -nr | head -1 | awk '{ print $NF }')` – lukaszzenko Nov 15 '21 at 14:38
  • Hey there! Thanks for simplifying my original idea; your approach does make it more concise. For anyone finding this useful, consider upvoting the original post too. It helps in ensuring diverse methods are seen and appreciated. Great collaboration! – Jan Aug 18 '23 at 07:55

For a random GPU you can do this:

export CUDA_VISIBLE_DEVICES=$((( RANDOM % 8 )))
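RANDOM is a bash builtin that yields a pseudo-random integer between 0 and 32767, so RANDOM % 8 always falls in the range 0–7; the modulus of 8 assumes an 8-GPU machine, so adjust it to your actual GPU count:

```shell
# Pick a pseudo-random device index in 0..7 (assumes 8 GPUs; adjust the modulus)
gpu=$(( RANDOM % 8 ))
export CUDA_VISIBLE_DEVICES=$gpu
echo "selected GPU: $CUDA_VISIBLE_DEVICES"
```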