
We have only one GPU (with CUDA drivers installed), and whenever one user runs code, all of the GPU memory is allocated to that user, so the other users are unable to use the GPU. Is there a way to get rid of this behavior?


1 Answer


If you are using Keras, add this at the beginning of your script:

from keras import backend as K
import tensorflow as tf

# Let TensorFlow allocate GPU memory on demand instead of grabbing it all up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Create a session with this config and hand it to Keras
sess = tf.Session(config=config)
K.set_session(sess)

This will prevent TensorFlow from taking all of the GPU memory, as can be seen here.

If you are using TensorFlow without Keras, add this:

import tensorflow as tf

# Allocate GPU memory on demand rather than reserving the whole card
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)  # pass any other Session arguments you need here

As shown here.
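
If you would rather put a hard cap on how much of the GPU a single process can claim (so several users can share the card), the same ConfigProto also exposes per_process_gpu_memory_fraction. A minimal sketch, assuming the TF 1.x API used above and an illustrative 40% cap:

import tensorflow as tf

# Cap this process at roughly 40% of the GPU's memory (illustrative value)
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)

With a cap like this, a few users can run sessions on the same GPU at the same time, as long as their fractions fit together.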

  • Thanks for your reply. But how do I limit GPU memory when running pure TensorFlow code (no Keras environment at all)? – Rajesh Jan 10 '18 at 05:20
  • @Rajesh I modified my answer according to your needs – vvvvv Jan 10 '18 at 06:33
  • @vinzee What if I don't use `tf.Session`? Because `tf2.1` allows us to train directly with `tf.Keras.Models.Compile` – L F Jun 15 '20 at 17:28
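
For anyone on TF 2.x, as in the last comment: there is no tf.Session there, but the same on-demand behaviour can be requested through tf.config. A minimal sketch, assuming TF 2.1 or later:

import tensorflow as tf

# TF 2.x: request memory growth per physical GPU instead of configuring a Session
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

This has to run before any tensors or models are placed on the GPU (i.e. before compile/fit).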