I have two NVIDIA devices in my Docker container. Here is the GPU usage information while two models are running at the same time:

Mon May 31 10:51:54 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43       Driver Version: 418.43       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 37%   64C    P2    81W / 250W |  10909MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:04:00.0 Off |                  N/A |
| 28%   50C    P8     9W / 250W |    147MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

It is obvious that both models are using device 0. Here I have two questions:

  1. How can I tell which device a model is using?
  2. If one model uses device 0, can we automatically make the other model use device 1?
haojie

1 Answer


I would say Python itself is not using the GPU per se; rather, a framework such as Torch or TensorFlow is. Are you using a machine learning framework here?

RSchauer
  • I am using TensorFlow. – haojie May 31 '21 at 03:56
  • Could be related to this question then: https://stackoverflow.com/questions/38009682/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell If any GPU device is available to TensorFlow, it should show up there (see the first sketch after these comments). – RSchauer May 31 '21 at 04:00
  • As for how to specify the device to be used, this could help: https://stackoverflow.com/questions/40069883/how-to-set-specific-gpu-in-tensorflow (see the second sketch below). – RSchauer May 31 '21 at 04:03
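
To expand on the first comment, here is a minimal sketch of checking which devices TensorFlow can see and where it actually places operations, assuming TensorFlow 2.x (the device names printed will differ per machine):

    import tensorflow as tf

    # List the physical GPUs TensorFlow can see inside the container.
    print(tf.config.list_physical_devices('GPU'))
    # e.g. [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'),
    #       PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]

    # Log the device every op is placed on, so the GPU in use is printed.
    tf.debugging.set_log_device_placement(True)
    a = tf.random.normal([2, 2])
    b = tf.matmul(a, a)   # the log shows e.g. ".../device:GPU:0"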
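And for the second question: TensorFlow does not automatically balance separate models across GPUs, so the usual approach is to launch each model's process with a different set of visible devices. A minimal sketch, again assuming TensorFlow 2.x (the GPU index 1 is an assumption for this two-GPU box):

    import os
    # Hide all but physical GPU 1 from this process. This must be set before
    # TensorFlow initializes CUDA, so do it before importing TensorFlow.
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'

    import tensorflow as tf

    # Inside this process, '/GPU:0' is now the first *visible* device,
    # i.e. physical GPU 1, so the second model lands on the idle card.
    with tf.device('/GPU:0'):
        x = tf.random.normal([1024, 1024])
        y = tf.matmul(x, x)
    print(y.device)   # e.g. .../device:GPU:0 (physical GPU 1)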