
As far as I understand, TensorFlow creates one device per core. (Source: https://github.com/samjabrahams/tensorflow-white-paper-notes: "NOTE: To reiterate, in this context 'single device' means using a single CPU core or single GPU, not a single machine. Similarly, 'multi-device' does not refer to multiple machines, but to multiple CPU cores and/or GPUs. See '3.3 Distributed Execution' for multiple machine discussion.")

My computer has four cores, but TensorFlow only recognises one:

>>> from tensorflow.python.client import device_lib 
>>> print(device_lib.list_local_devices())
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
bus_adjacency: BUS_ANY
incarnation: 13835232998165214133
]

Do you have any idea why?

LaLa
  • It looks like this is a bug: http://stackoverflow.com/questions/37296064/find-number-of-detected-devices-in-tensorflow, https://github.com/tensorflow/tensorflow/issues/583 See if rebuilding the latest tensorflow version from source helps? – bunji Oct 06 '16 at 12:57
  • cpu:0 is a device representing all cores on the machine – Yaroslav Bulatov Oct 06 '16 at 16:36

1 Answer


By default, cpu:0 represents all cores available to the process. You can create devices cpu:0 and cpu:1, each representing one logical core, with something like this:

import tensorflow as tf

config = tf.ConfigProto(device_count={"CPU": 2},          # expose two CPU devices
                        inter_op_parallelism_threads=2,   # run up to 2 ops in parallel
                        intra_op_parallelism_threads=1)   # 1 thread within each op
sess = tf.Session(config=config)

Then you can pin ops to specific devices:

with tf.device("/cpu:0"):
  # ...

with tf.device("/cpu:1"):
  # ...

Yaroslav Bulatov
  • Thanks a lot! The assignment works. However, for everyone else: be careful with print(device_lib.list_local_devices()), as it still only lists one CPU device. – LaLa Oct 09 '16 at 09:15
  • What do these values mean and why did you choose the values like that?: inter_op_parallelism_threads=2, intra_op_parallelism_threads=1 – LaLa Oct 09 '16 at 09:28
  • intra_op_parallelism_threads is the size of the Eigen threadpool used within a single op, while inter_op_parallelism_threads controls how many ops can be launched in parallel – Yaroslav Bulatov Oct 09 '16 at 19:35
  • Today's version of https://www.tensorflow.org/guide/using_gpu seems to suggest the `with` construct can be used without the `device_count` etc being configured, unless I'm missing something from the site. In practice, however, it wakes all 4 GPUs before running the operation only on the designated one. – icedwater Nov 02 '18 at 07:52
  • How can I restrict TensorFlow to use only one core of a CPU? – fisakhan Aug 18 '20 at 08:43