
I have a workstation with 2 GPUs, and I am trying to run multiple TensorFlow jobs at the same time so that I can train more than one model at once.

For example, I've tried to separate the sessions onto different devices via the Python API. In script1.py:

with tf.device("/gpu:0"):
    # do stuff

in script2.py:

with tf.device("/gpu:1"):
    # do stuff

in script3.py

with tf.device("/cpu:0"):
    # do stuff
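
(For reference, each script is essentially a stripped-down version of something like the following; the matmul is just a stand-in for the real training workload, and log_device_placement makes TensorFlow print where each op actually runs.)

import tensorflow as tf

with tf.device("/gpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# Print the device each op is placed on, to verify the pinning.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(b))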

If I run each script by itself, I can see that it is using the specified device. (Also, each model fits comfortably on a single GPU and doesn't use the other one, even if both are available.)

However, if one script is running and I try to run another, I always get this error:

I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 0 with properties: 
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 187.65MiB
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 1 with properties: 
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:04:00.0
Total memory: 4.00GiB
Free memory: 221.64MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:127] DMA: 0 1 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 0:   Y Y 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 1:   Y Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 980, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Allocating 187.40MiB bytes.
E tensorflow/stream_executor/cuda/cuda_driver.cc:932] failed to allocate 187.40M (196505600 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
F tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Check failed: gpu_mem != nullptr  Could not allocate GPU device memory for device 0. Tried to allocate 187.40MiB
Aborted (core dumped)

It seems that each TensorFlow process tries to grab all of the GPUs on the machine when it loads, even if not all of the devices will be used to run the model.

I see there is an option to limit the amount of GPU memory each process uses:

tf.GPUOptions(per_process_gpu_memory_fraction=0.5)

I haven't tried it, but it seems like that would make two processes each try to share 50% of each GPU instead of running each process on a separate GPU.
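
(If I understand the docs correctly, it would be wired into a session roughly like this; I haven't actually run it:)

import tensorflow as tf

# Cap this process at 50% of the memory on each visible GPU.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))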

Does anyone know how to configure TensorFlow to use only one GPU and leave the other available for another TensorFlow process?


1 Answer


TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following.

  1. Run each session in a different Python process.
  2. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:

    $ CUDA_VISIBLE_DEVICES=0 python my_script.py  # Uses GPU 0.
    $ CUDA_VISIBLE_DEVICES=1 python my_script.py  # Uses GPU 1.
    $ CUDA_VISIBLE_DEVICES=2,3 python my_script.py  # Uses GPUs 2 and 3.
    

    Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. "/gpu:0", etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.
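
    If you can't set the variable on the command line (for example, from a Jupyter notebook), you can set it in Python instead. A sketch, which must run before TensorFlow initializes CUDA:

    import os

    # Must be set before the first `import tensorflow` in the process.
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # number GPUs by PCI bus ID, as nvidia-smi does
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"        # expose only GPU 0 to this process

    import tensorflow as tf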

  • Perfect. This works great. I was using different processes anyway. Now I'm able to run separate processes on different GPUs and am training multiple models at the same time. Also, with this method there is no need to specify tf.device in the code, as TensorFlow will automatically make use of CUDA_VISIBLE_DEVICES accordingly. Thanks. – j314erre Jan 14 '16 at 01:20
  • Within a Jupyter notebook use `os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"` and `os.environ["CUDA_VISIBLE_DEVICES"]="0"` http://stackoverflow.com/questions/37893755/tensorflow-set-cuda-visible-devices-within-jupyter – Matt Kleinsmith Mar 16 '17 at 17:06
  • If we ever need to know how many GPUs are available to use, we can run: `nvidia-smi -L`. Then we can be sure of how many CUDA devices we _could_ make visible. – Pablo Rivas Mar 28 '17 at 21:53
  • @mrry does this run in parallel or serial? – Coddy Feb 17 '20 at 17:32