
From the experiments I ran, it seems that TensorFlow automatically uses all CPUs on one machine. Furthermore, TensorFlow seems to refer to all CPUs as /cpu:0.

Am I right that only the different GPUs of one machine are indexed and viewed as separate devices, while all the CPUs on one machine are viewed as a single device?

Is there any way for a machine to have multiple CPUs from TensorFlow's perspective?

PaulWen

1 Answer


By default, all CPUs available to the process are aggregated under the cpu:0 device.

There's an answer by mrry here showing how to create logical devices like /cpu:1 and /cpu:2.
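The trick from that answer can be sketched roughly as follows (written against the `tf.compat.v1` graph API so it also runs on current TensorFlow; the `device_count` value and the constants are illustrative). Note that this only creates *logical* CPU devices; it does not pin them to particular physical cores:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use the 1.x graph/Session model

# Ask the runtime to expose two logical CPU devices instead of one.
config = tf1.ConfigProto(device_count={"CPU": 2})

with tf1.device("/cpu:0"):
    a = tf1.constant(1.0)
with tf1.device("/cpu:1"):
    b = tf1.constant(2.0)
c = a + b

with tf1.Session(config=config) as sess:
    print(sess.run(c))  # 3.0
```

In TensorFlow 2.x, a similar effect can be achieved with `tf.config.set_logical_device_configuration` on the physical CPU device.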

There doesn't seem to be working functionality in TensorFlow to pin logical devices to specific physical cores, or to make use of NUMA nodes.

A possible workaround is to run distributed TensorFlow with multiple processes on one machine and use taskset on Linux to pin each process to specific cores.
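That workaround might look like this (`worker.py` and the core lists are hypothetical; `taskset` is part of util-linux). Each process gets its own CPU affinity mask, so the OS scheduler keeps it on the assigned cores:

```shell
# Start a two-process local "cluster", pinning each worker process
# to its own pair of cores. worker.py stands in for a script that
# starts a tf.train.Server for its task and runs the training loop.
taskset -c 0,1 python worker.py --job_name=worker --task_index=0 &
taskset -c 2,3 python worker.py --job_name=worker --task_index=1 &
wait
```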

Yaroslav Bulatov
  • Can you give me a hint (a source code location) as to where TensorFlow uses multiple cores of one CPU to run one partition of a graph? As far as I understand, TF divides the graph into partitions and each partition is run on one device by its corresponding executor. But how does a CPU with multiple cores execute a partition? – Hamed Oct 31 '19 at 18:54
  • I want TensorFlow not to see/know about any cores except one. How can I do that? – fisakhan Sep 30 '20 at 13:32