I know there have been multiple posts on this matter, but it seems that every time a new keras / tensorflow version is released, package versions conflict and inevitably the versions I end up with don't use my GPU.
I have a single conda environment where I have installed all my packages (the base environment is kept empty to avoid package conflicts).
Packages:
- keras 2.3.1
- tensorflow-gpu 1.14.0
and no other tensorflow / keras versions (although I do have related packages such as keras-applications and tensorboard).
When I try the following, I get conflicting results:
# picks up the GPU it seems
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
>>> [name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4346857393168915334
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 15716071101553989809
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 6257014534476475142
physical_device_desc: "device: XLA_GPU device"
]
# but then Keras doesn't pick up the GPU?
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
>>> []
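To narrow it down, I re-implemented the filter that (as far as I can tell) Keras applies to that device list. This is just a minimal sketch using the device types from my output above, no TF import needed, so the Device namedtuple is a stand-in for the real device objects:

```python
# Sketch (no TF needed): Keras' _get_available_gpus() filters
# device_lib.list_local_devices() on device_type == 'GPU', so a list
# containing only an XLA_GPU entry produces an empty result.
from collections import namedtuple

# Stand-in for the device objects returned by list_local_devices()
Device = namedtuple("Device", ["name", "device_type"])

# The three devices from my list_local_devices() output above
devices = [
    Device("/device:CPU:0", "CPU"),
    Device("/device:XLA_CPU:0", "XLA_CPU"),
    Device("/device:XLA_GPU:0", "XLA_GPU"),
]

gpus = [d.name for d in devices if d.device_type == "GPU"]
print(gpus)  # prints [], matching what Keras reports
```

So if that filter is what Keras actually runs, the empty list would follow directly from the fact that my device list has an XLA_GPU entry but no plain GPU entry.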
So TensorFlow seems to pick up the GPU (at least as an XLA_GPU device), but Keras doesn't? This doesn't make sense to me, as Keras is using the TensorFlow backend.
Am I doing something wrong here? Why is TF picking up the GPU but Keras isn't? Will Keras still use the GPU even though the _get_available_gpus() method returns an empty list?