System info: TensorFlow 1.1.0 (GPU build), Windows, Python 3.5; the code runs in IPython consoles.
I am trying to run two different TensorFlow sessions, one on the GPU (which does some batch work) and one on the CPU, which I use for quick tests while the other runs.
The problem is that when I spawn the second session, specifying with tf.device('/cpu:0'),
it still tries to allocate GPU memory and crashes my other session.
My code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import time
import tensorflow as tf
with tf.device('/cpu:0'):
    with tf.Session() as sess:
        # Here 6 GB of GPU RAM are allocated.
        time.sleep(5)
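The only session-level workaround I am aware of is hiding the GPU via the session config; a minimal sketch of what I would expect to work, assuming tf.ConfigProto's device_count option actually keeps the session off the GPU:

import tensorflow as tf

# Sketch: ask this session not to create any GPU devices.
# Assumption: device_count={'GPU': 0} prevents the GPU memory allocation.
config = tf.ConfigProto(device_count={'GPU': 0})
with tf.Session(config=config) as sess:
    pass  # CPU-only test work would go here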
How do I force TensorFlow to ignore the GPU?
UPDATE:
As suggested in a comment by @Nicolas, I took a look at this answer and ran
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import tensorflow as tf
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
which prints:
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 2215045474989189346
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 6787871540
locality {
bus_id: 1
}
incarnation: 13663872143510826785
physical_device_desc: "device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0"
]
It seems that even though I explicitly tell the script to ignore any CUDA devices, it still finds and uses them. Could this be a bug in TF 1.1?
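My next check is whether the variable is simply being read too late: my assumption is that CUDA_VISIBLE_DEVICES only takes effect if it is set before TensorFlow initializes CUDA in the process, which may not hold in a long-lived IPython console. A minimal sketch of that check, assuming a freshly started interpreter:

import os
import sys

# Assumption: CUDA_VISIBLE_DEVICES is only honored if it is set before
# TensorFlow touches CUDA, so a console that already ran a GPU session
# will ignore it. Guard against a stale import first.
assert "tensorflow" not in sys.modules, "restart the console first"
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import tensorflow as tf
from tensorflow.python.client import device_lib

# In a fresh process this should list only the CPU device.
print(device_lib.list_local_devices())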