
I have a shared machine with 64 cores on which I want to run a big pipeline of Keras functions. The problem is that Keras automatically uses all the available cores, and on a shared machine I can't allow that.

I use Python and I want to run 67 neural networks in a for loop. I would like to use half of the available cores.

I can't find any way of limiting the number of cores in Keras... Do you have any clue?

petezurich
Mohamed AL ANI

2 Answers


As @Yu-Yang suggested, I used these lines before each fit (TensorFlow 1.x / standalone Keras):

import tensorflow as tf
from keras import backend as K

# Limit both TF thread pools so Keras uses at most 32 cores.
K.set_session(tf.Session(config=tf.ConfigProto(
    intra_op_parallelism_threads=32,
    inter_op_parallelism_threads=32)))

Check the CPU usage with htop to confirm the limit is respected.
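A related option (my own addition, not from the answer above): TensorFlow also reads thread limits from environment variables at import time, so the cap can be set before any Keras/TensorFlow import. The variable names below are the ones I believe apply (TF_NUM_INTRAOP_THREADS / TF_NUM_INTEROP_THREADS for TF 2.x, OMP_NUM_THREADS for MKL/OpenMP builds); treat them as assumptions to verify against your TF version:

```python
import os

# Must run before TensorFlow/Keras is imported anywhere in the process.
# Assumed variable names: TF_NUM_INTRAOP_THREADS / TF_NUM_INTEROP_THREADS
# (TF 2.x); OMP_NUM_THREADS covers MKL/OpenMP-backed ops.
for var in ("OMP_NUM_THREADS",
            "TF_NUM_INTRAOP_THREADS",
            "TF_NUM_INTEROP_THREADS"):
    os.environ[var] = "32"
```

Because the variables are read once at import, this approach avoids touching session config at all, which can be convenient in a pipeline that creates many models in a loop.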

Alpha
Mohamed AL ANI
  • For information, to complete the answer, here is the tf ConfigProto containing all the options: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/core/protobuf/config.proto – Filippo Mazza Jan 18 '18 at 11:22
  • This doesn't seem to work for inference when loading a saved model. – Steven Dec 13 '18 at 01:41

As mentioned in this solution (https://stackoverflow.com/a/54832345/5568660), if you are using TensorFlow or tensorflow-gpu directly, you can build a tf.ConfigProto and feed it to the session:

import tensorflow as tf

# Cap both thread pools at 32 threads (TF 1.x API).
config = tf.ConfigProto(intra_op_parallelism_threads=32,
                        inter_op_parallelism_threads=32,
                        allow_soft_placement=True)

session = tf.Session(config=config)
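As a complement (my own sketch, not from either answer): on Linux the whole Python process can be pinned to half of its currently allowed cores with os.sched_setaffinity, which restricts every thread the process spawns regardless of how TF sizes its pools:

```python
import os

# Linux-only sketch: restrict this process, and every thread it spawns,
# to half of the cores it is currently allowed to use.
# os.sched_setaffinity is unavailable on macOS/Windows, hence the guard.
if hasattr(os, "sched_getaffinity"):
    allowed = sorted(os.sched_getaffinity(0))         # cores usable right now
    half = set(allowed[: max(1, len(allowed) // 2)])  # keep the first half
    os.sched_setaffinity(0, half)                     # 0 = current process
```

Unlike the session-config approach, this is enforced by the OS scheduler, so it also covers any native libraries that spawn their own threads.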
Shalini Maiti