
I have a Python training script that uses a CUDA GPU to train the model (the Kohya Trainer script, available here). It encounters an out-of-memory error:

OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 1; 23.65 
GiB total capacity; 144.75 MiB already allocated; 2.81 MiB free; 146.00 MiB 
reserved in total by PyTorch) If reserved memory is >> allocated memory try 
setting max_split_size_mb to avoid fragmentation.  See documentation for Memory 
Management and PYTORCH_CUDA_ALLOC_CONF

After investigating, I found that the script is using GPU 1 instead of GPU 0. GPU 1 is currently under heavy load with little memory left, while GPU 0 still has adequate resources. How do I make the script use GPU 0?

Even after I change from:

text_encoder.to("cuda")

to:

text_encoder.to("cuda:0")

the script still uses GPU 1, as reported in the error message.
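
For reference, here is a minimal, self-contained version of what I expected `.to("cuda:0")` to do, using a stand-in `nn.Linear` instead of the real text encoder:

import torch
import torch.nn as nn

# Stand-in for text_encoder; the real script loads a Hugging Face model here
model = nn.Linear(8, 8).to("cuda:0")
batch = torch.randn(4, 8, device="cuda:0")

out = model(batch)
print(out.device)  # prints cuda:0, yet the training script still lands on GPU 1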

Output of `nvidia-smi`:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  Off |
| 66%   75C    P2   437W / 450W |   5712MiB / 24564MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:C1:00.0 Off |                  Off |
| 32%   57C    P2   377W / 450W |  23408MiB / 24564MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1947      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A     30654      C   python                           5704MiB |
|    1   N/A  N/A      1947      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A     14891      C   python                          23400MiB |
+-----------------------------------------------------------------------------+

UPDATE 1

The same notebook can see both GPUs:

import torch
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_properties(i))

which outputs:

_CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24217MB, multi_processor_count=128)
_CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24217MB, multi_processor_count=128)
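
To double-check which CUDA index maps to the saturated card, the free memory per device can also be queried (a quick sanity check, not part of the training script):

import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"cuda:{i}: {free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB")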

UPDATE 2

Setting `CUDA_VISIBLE_DEVICES=0` results in this error:

RuntimeError: CUDA error: invalid device ordinal
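
For context, the variable is set inside the notebook roughly like this (the exact cell is not shown above, so treat the placement as an assumption). Note that it only takes effect if it runs before anything initializes CUDA:

import os

# Must run before the first CUDA call; once torch has initialized the
# driver, changing CUDA_VISIBLE_DEVICES no longer remaps device indices
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # expected: 1 once the mask applies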

  • Does https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on help? – AKX Mar 22 '23 at 11:02
  • After I specify `CUDA_VISIBLE_DEVICES=0` or `CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0`, both report `RuntimeError: CUDA error: invalid device ordinal` – Raptor Mar 23 '23 at 01:19
  • The same error appears when I set `CUDA_VISIBLE_DEVICES=1`, which is quite weird – Raptor Mar 23 '23 at 01:25

0 Answers