I want to limit the memory usage per GPU. As suggested in this answer, I do the following:
import tensorflow as tf

# Limit each process to 90% of the GPU memory, as suggested in the answer.
config = tf.ConfigProto(allow_soft_placement=True,
                        gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.9))

saver = tf.train.Saver()
sv = tf.train.Supervisor(logdir=FLAGS.log_root,
                         is_chief=True,
                         saver=saver,
                         summary_op=None,
                         save_summaries_secs=60,
                         save_model_secs=FLAGS.checkpoint_secs,
                         global_step=model.global_step)

# Create the session with the memory-limited config.
sess = sv.prepare_or_wait_for_session(config=config)
But it still does not work: the GPU-Util of one of the GPUs still reaches 100%. Could you please tell me how to fix this issue? Thanks in advance!
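For reference, here is a stripped-down sketch of the same GPUOptions approach with a plain tf.Session instead of the Supervisor (assuming TensorFlow 1.x; the 0.5 fraction and the visible_device_list value are only example values, not from my actual training code):

import tensorflow as tf

# Example values only: cap this process at roughly half of the GPU's memory
# and expose only GPU 0 to it.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5,
                            visible_device_list="0")
config = tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # A tiny graph is enough to check the memory footprint in nvidia-smi.
    print(sess.run(tf.constant([1.0, 2.0])))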