I am running a PyTorch deep learning job on a GPU, but the job is fairly light.
My GPU has 8 GB of memory, yet the job only uses about 2 GB, and GPU-Util is close to 0%:
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
| 0% 36C P2 45W / 210W | 1155MiB / 8116MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
Based on the GPU-Util and memory usage, it looks like I could fit in about three more jobs.
However, I am not sure whether that would affect the overall runtime.
If I run multiple jobs on the same GPU, does that affect the overall runtime?
I tried it once, and I think there was some delay.
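One way to answer this empirically is to time one job alone versus several copies running concurrently and compare the wall-clock times. Below is a minimal, hypothetical sketch of that measurement; the `CMD` list is a stand-in (a short `sleep`) that you would replace with your actual training command, e.g. `["python", "train.py"]`:

```python
import subprocess
import sys
import time

# Hypothetical job: replace CMD with your real training command.
# A short sleep stands in for the workload here.
CMD = [sys.executable, "-c", "import time; time.sleep(1)"]

def run_jobs(n):
    """Launch n copies of CMD concurrently and return total wall-clock time."""
    start = time.time()
    procs = [subprocess.Popen(CMD) for _ in range(n)]
    for p in procs:
        p.wait()
    return time.time() - start

t1 = run_jobs(1)   # baseline: one job alone
t4 = run_jobs(4)   # four jobs sharing the GPU
print(f"1 job: {t1:.1f}s, 4 concurrent jobs: {t4:.1f}s")
```

If the four-job time is close to the one-job time, the jobs are not contending much for the GPU; if it approaches four times the baseline, the jobs are effectively being serialized.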