I have several GPUs, but I only want to use one GPU for my training. I am using the following options:
```python
import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    ...
```
Despite setting all these options, every one of my GPUs allocates memory, and the number of TensorFlow processes ends up equal to the number of GPUs (one process per GPU). How can I prevent this from happening?
Note:

- I do not want to set the devices manually, and I do not want to set `CUDA_VISIBLE_DEVICES`, since I want TensorFlow to automatically find the best (i.e. an idle) GPU available.
- When I try to start another run, it uses the same GPU that is already used by another TensorFlow process, even though there are several other free GPUs (apart from the memory allocated on them).
- I am running TensorFlow in a Docker container: `tensorflow/tensorflow:latest-devel-gpu-py`
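For context, the selection behavior I would like TensorFlow to perform automatically can be approximated outside of TensorFlow by parsing `nvidia-smi` output and picking the GPU with the least memory in use. This is only a sketch of what I mean by "an idle GPU", not something I want to maintain myself; the function name `pick_idle_gpu` is my own:

```python
import subprocess

def pick_idle_gpu(smi_output):
    """Return the index of the GPU with the least memory in use.

    smi_output is text in the form produced by:
      nvidia-smi --query-gpu=index,memory.used --format=csv,noheader,nounits
    i.e. one "index, used_mib" pair per line.
    """
    best_idx, best_used = None, None
    for line in smi_output.strip().splitlines():
        idx, used = (int(field) for field in line.split(","))
        if best_used is None or used < best_used:
            best_idx, best_used = idx, used
    return best_idx

# In a real run the text would come from the nvidia-smi CLI, e.g.:
# smi_output = subprocess.check_output(
#     ["nvidia-smi", "--query-gpu=index,memory.used",
#      "--format=csv,noheader,nounits"]).decode()
```

The catch is that acting on the chosen index still means restricting devices manually (e.g. via `CUDA_VISIBLE_DEVICES`), which is exactly what I am trying to avoid.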