
I want to write an assertion that ensures that certain ops in my graph will be run on a specific device. How can I determine programmatically the device placement of an op so that I can write such an assertion?

sudo-nim

1 Answer


You can ensure that an operation is run on a specific device by using

with tf.device('/gpu:0'):

around the definition of the operation (see here for more information).
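A minimal sketch of that pattern, using the TF 1.x graph-mode API; passing `log_device_placement=True` makes the runtime print where each op actually runs:

```python
import tensorflow as tf

# Pin these ops to GPU 0 by defining them inside the tf.device scope.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    c = a + b

# log_device_placement=True prints the device each op is assigned to.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```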

edit:

Every available GPU has its own index: '/gpu:0', '/gpu:1', '/gpu:2', etc. This way you can bind specific operations to specific GPUs. When TensorFlow initializes its devices (for example, when the first session is created), it prints out which GPUs are available and which index each has been assigned.

(For example, it prints: Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: ...))
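Building on that, here is a hedged sketch of the assertion the question asks for, assuming a machine with at least two GPUs. Each graph op records the device it was pinned to in its `op.device` attribute; note that this is the device *requested* at graph-construction time, which soft placement may still override at run time:

```python
import tensorflow as tf

# Assumes a machine with two GPUs; adjust the indices to your setup.
with tf.device('/gpu:0'):
    x = tf.constant([1.0, 2.0], name='x')
with tf.device('/gpu:1'):
    y = tf.constant([3.0, 4.0], name='y')

# op.device holds the device string requested at graph construction,
# so the assertion from the question can be written against it.
assert 'gpu:0' in x.op.device.lower()
assert 'gpu:1' in y.op.device.lower()
```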

BlueSun
  • That will ensure that things aren't accidentally run on a CPU instead of a GPU. The problem is that it won't make sure you're using the intended GPU. For example, this wouldn't catch you training models on your display GPU. – sudo-nim Apr 02 '17 at 19:10
  • @sudo-nim The very first paragraph of the linked page shows how to assign ops to different GPUs. I edited my answer to make it clearer. – BlueSun Apr 03 '17 at 11:39
  • That's helpful; the only problem is I'll still have to check the logs to see what the device mapping is. If I run my code on a different machine that has different device mappings, or something changes with my NVIDIA setup, this wouldn't automatically catch it. – sudo-nim Apr 03 '17 at 15:18
  • @sudo-nim You can get the list of the GPUs at run time using [this](http://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow) – BlueSun Apr 03 '17 at 15:45
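For reference, a minimal sketch of the approach from that linked answer, using the TF 1.x `device_lib` module to enumerate the devices visible at run time:

```python
from tensorflow.python.client import device_lib

# Enumerate the devices TensorFlow can see on the current machine.
devices = device_lib.list_local_devices()
gpu_names = [d.name for d in devices if d.device_type == 'GPU']
print(gpu_names)  # e.g. ['/device:GPU:0', '/device:GPU:1']
```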