I am using this code (please excuse its messiness) to train on my CPU. I have a custom RL environment that I created myself, and I am training a DQN agent on it. But when I run the same code on a GPU, it barely utilizes the GPU, and it is actually slower than my CPU.
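Roughly, my training loop looks like the sketch below (heavily simplified; I'm using Keras here for illustration, and CustomEnv, the network sizes, and all hyperparameters are placeholders rather than my real values):

import random
from collections import deque

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

STATE_DIM, N_ACTIONS = 8, 4  # stand-ins for my env's real shapes


class CustomEnv:
    """Stub standing in for my real environment (random dynamics)."""
    def reset(self):
        return np.random.rand(STATE_DIM).astype(np.float32)

    def step(self, action):
        next_state = np.random.rand(STATE_DIM).astype(np.float32)
        reward, done = float(np.random.rand()), np.random.rand() < 0.05
        return next_state, reward, done


def build_q_network():
    model = tf.keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_ACTIONS),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model


env, q_net = CustomEnv(), build_q_network()
replay = deque(maxlen=10_000)
gamma, epsilon, batch_size = 0.99, 0.1, 32

for episode in range(100):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection: one tiny forward pass per env step
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = int(np.argmax(q_net.predict(state[None], verbose=0)))
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        # one small batch update per env step
        if len(replay) >= batch_size:
            batch = random.sample(replay, batch_size)
            s, a, r, s2, d = map(np.array, zip(*batch))
            targets = q_net.predict(s, verbose=0)
            next_q = q_net.predict(s2, verbose=0).max(axis=1)
            targets[np.arange(batch_size), a] = (
                r + gamma * next_q * (1.0 - d.astype(np.float32))
            )
            q_net.fit(s, targets, verbose=0)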
This is the output of nvidia-smi. As you can see, my process is running on both GPUs (and has allocated nearly all of GPU 0's memory), yet GPU utilization sits at 0% and the speed is much slower than I would expect.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:00:05.0 Off |                  N/A |
| 23%   37C    P2    60W / 250W |  11619MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN Xp            Off  | 00000000:00:06.0 Off |                  N/A |
| 23%   29C    P8     9W / 250W |    157MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     25540      C   python3                                    11609MiB |
|    1     25540      C   python3                                      147MiB |
+-----------------------------------------------------------------------------+
Can anyone point out what I can do to change my code so that it actually makes use of the GPU's capabilities?
PS: Notice that I have two GPUs and my process is running on both of them. Even if I restrict the run to a single GPU, that GPU is still barely utilized and the run is still slower than on my CPU, so the dual-GPU setup is not the issue.
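For reference, this is how I restrict the run to a single GPU while testing (CUDA_VISIBLE_DEVICES is the standard CUDA environment variable, and it has to be set before the deep learning framework initializes CUDA):

import os

# Expose only GPU 0 to this process; use "1" for the other card,
# or "" to force CPU-only execution. Must be set BEFORE the framework
# (TensorFlow in the sketch above) touches the GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # imported only after the variable is set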