
When training using TF Slim's train_image_classifier.py I would like to tell Slim to only allocate what GPU memory it needs, rather than allocating all the memory.

Were I using straight up TF and not Slim I could say this:

config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)

Or even just this to put a hard cap on GPU memory use:

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

How can I tell Slim the same thing(s)?

What I'm failing to grasp is that Slim seems to use its own training loop, and I can't find docs on the nitty-gritty of configuring that loop. So even if someone could just point me to good Slim docs, that'd be fantastic.

Thanks in advance!

Eric M

1 Answer


You can pass the allow_growth option via the session_config parameter of slim.learning.train, as follows:

session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = True
slim.learning.train(..., session_config=session_config)

See tensorflow/contrib/slim/python/slim/learning.py#L615 and tensorflow #5530.
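The same mechanism should cover your hard-cap case too, since session_config is an ordinary tf.ConfigProto. A sketch (assuming the same slim.learning.train signature as above; the ... stands for your usual train_op and logdir arguments):

session_config = tf.ConfigProto()
# Cap this process at roughly a third of the GPU's memory,
# mirroring the tf.GPUOptions(per_process_gpu_memory_fraction=0.333) example.
session_config.gpu_options.per_process_gpu_memory_fraction = 0.333
slim.learning.train(..., session_config=session_config)

Note you set the fraction directly on session_config.gpu_options rather than constructing a separate tf.GPUOptions object; both routes populate the same proto field.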

Pierre C.
Pierre C.
    Oops. Thanks indeed. That seems to be what I'm looking for. – Eric M Jun 02 '19 at 18:59