My laptop is a ThinkPad T470p, which seems to have dual GPUs: an integrated Intel HD Graphics 630 and a discrete GeForce 940MX.
I successfully installed CUDA 10.1 on this machine, and now I want to run a training job in TensorFlow. I want to know which GPU the training is using, so I tried this:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
and this is what I got:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 17770480900406893487,
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 1462163865
locality {
bus_id: 1
links {
}
}
incarnation: 5306128727345722238
physical_device_desc: "device: 0, name: GeForce 940MX, pci bus id: 0000:02:00.0, compute capability: 5.0"]
I am just curious why there are two incarnations: one is listed under the name /device:GPU:0, and the other appears next to GeForce 940MX.
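To make the output easier to read, here is a small sketch that prints each entry of the same device_lib call on its own line, so it is clearer which name, incarnation, and description belong to which entry (the field names are taken from the output above):

```python
from tensorflow.python.client import device_lib

# Print one line per entry returned by list_local_devices(), so the
# name, device_type, and incarnation fields of each entry stay together
# instead of running into the next entry.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type, dev.incarnation, dev.physical_device_desc)
```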
From my very limited knowledge, is it true that CUDA and TensorFlow can only run on the GeForce card, because CUDA doesn't support the integrated Intel GPU at all?
In that case, how do I tell TensorFlow to run on the GeForce 940MX? Since there are two names, I am not sure whether they refer to different GPUs. Many thanks for your input!
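For reference, this is what I would try to pin the computation explicitly: a minimal sketch, assuming TensorFlow 2.x and that /device:GPU:0 is the only CUDA device on this machine (since the integrated Intel GPU is not a CUDA device, only the NVIDIA card should be enumerated):

```python
import tensorflow as tf

# List the physical GPUs that TensorFlow/CUDA can see. On a machine
# like this, only the NVIDIA GeForce 940MX would be expected here.
print(tf.config.list_physical_devices("GPU"))

# Pin an op (or a whole model-building block) to GPU:0. With soft
# device placement (the TF 2.x default), this falls back to the CPU
# if no GPU is available, so the code still runs either way.
with tf.device("/device:GPU:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# Show which device actually executed the matmul.
print(b.device)
```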