10

Alright. I know that we can limit the number of cores used by a Keras (TF backend) model by using the following method:

from keras import backend as K
K.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=2, inter_op_parallelism_threads=2, device_count={'CPU': 2})))

And we can specify individual tensor operations like this:

import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')

But what if we want to specify a list of individual CPUs to be used by the Keras model?

Steven
  • Maybe this would help: https://stackoverflow.com/questions/38187808/how-can-i-run-tensorflow-on-one-single-core – Abhishek Singh Dec 16 '18 at 06:07
  • Hi @AbhishekSingh. No, unfortunately that thread is about limiting the number of cores, not specifying which cores you want the processes to run on. – Steven Dec 17 '18 at 15:34

1 Answer

3

I don't think you can set processor affinity from within TensorFlow; that is handled at the operating-system level.

However, Linux has a useful tool, taskset, that can help you.

For example,

taskset --cpu-list 0,1 python3 main.py

will assign core 0 and core 1 to the process that runs python3 main.py.

You can verify that with htop.
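If you prefer to stay inside Python, the same OS-level affinity can be inspected and set with os.sched_getaffinity and os.sched_setaffinity (Linux only, Python 3.3+). This is just a minimal sketch of that alternative, assuming you want cores 0 and 1 as in the taskset example above:

import os

# Show which cores the current process is allowed to run on.
print("Allowed cores before:", os.sched_getaffinity(0))

# Pin this process (pid 0 means the calling process) to cores 0 and 1,
# roughly equivalent to launching it with taskset --cpu-list 0,1.
os.sched_setaffinity(0, {0, 1})

print("Allowed cores after:", os.sched_getaffinity(0))

Note that this only restricts where the process is scheduled; it does not change how many threads TensorFlow creates, which is still governed by the ConfigProto settings shown in the question.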

Tay2510
  • Wow, just when it took me hours to come up with this solution and finally did it, I read your answer... Hopefully others will read it before they search, because this solution works wonderfully (+1!) – Markus Jun 21 '19 at 22:31