
I'm using Python 3.5, Keras 2.1.5, and TensorFlow GPU 1.8.0.

I'm training multiple models at the same time (and so are my coworkers) on a cluster with 4 GPUs.

I've tried the following code to select a GPU:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # or "1", etc. -- set before Keras/TensorFlow touches the GPU

from keras import backend as K
K.tensorflow_backend._get_available_gpus()  # should now list only the selected device

Sadly, with this method I have to manually pick an available GPU (running nvidia-smi, choosing one, and praying that nobody starts a training run between the nvidia-smi check and the launch with nohup).

Is there a way to select a GPU automatically?
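
What I have in mind is something like the sketch below, which asks nvidia-smi for the GPU with the least used memory and exports it before Keras/TensorFlow is imported. This is untested, `pick_free_gpu` is just a name I made up, it assumes nvidia-smi is on the PATH, and it still leaves a race window between the check and the launch:

import os
import subprocess

def pick_free_gpu():
    # Ask nvidia-smi for "index, memory.used" pairs, one GPU per line
    query = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,memory.used",
        "--format=csv,noheader,nounits",
    ]).decode("utf-8")
    stats = [line.split(",") for line in query.strip().splitlines()]
    # Pick the index of the GPU with the smallest memory footprint
    index, _ = min(stats, key=lambda s: int(s[1]))
    return index.strip()

os.environ["CUDA_VISIBLE_DEVICES"] = pick_free_gpu()

# Import Keras/TensorFlow only after the variable is set
from keras import backend as K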

Also, is there a way to avoid blocking access to this GPU for other users/programs?

--edit:--

For the second question, there is also another duplicate explaining how to allocate only a fraction of the GPU memory.
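
For future readers, the pattern from that duplicate, as far as I understand it with the TensorFlow 1.x ConfigProto API that ships with TF 1.8, looks roughly like this (the 0.25 fraction is just an example value):

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                     # allocate memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.25  # cap at ~25% of the card
set_session(tf.Session(config=config))                     # make Keras use this session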

Mael Abgrall
  • Have you checked this answer by Yaroslav Bulatov to ["Tensorflow on shared GPUs: how to automatically select the one that is unused"](https://stackoverflow.com/a/41638727/624547)? – benjaminplanche Jun 06 '18 at 13:13
  • @Aldream No, didn't see it before, thanks, I'll mark it as duplicate – Mael Abgrall Jun 06 '18 at 13:16

0 Answers