
PyTorch keeps raising the error below even though I've reduced the batch size to 8. I tried torch.cuda.empty_cache(), but that didn't solve the issue. I've also tried the solutions proposed in How to clear CUDA memory in PyTorch and pytorch out of GPU memory, but they didn't work either.
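
For context, here is a sketch of the training loop with the mitigations I tried; dataset, model, diffusion_loss, and optimizer are placeholders standing in for the actual objects in my ddpm.py, not the real code:

    import torch
    from torch.utils.data import DataLoader

    # Batch size already lowered to 8.
    dataloader = DataLoader(dataset, batch_size=8)

    for images, _ in dataloader:
        images = images.to("cuda")
        loss = diffusion_loss(model, images)  # placeholder for the DDPM loss
        optimizer.zero_grad()
        loss.backward()                       # <- the OOM below is raised here
        optimizer.step()
        torch.cuda.empty_cache()              # tried this; it did not help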

Traceback (most recent call last):
  File "D:\Programming\MachineLearning\Projects\diffusion_models\practice\ddpm.py", line 110, in <module>
    launch()
  File "D:\Programming\MachineLearning\Projects\diffusion_models\practice\ddpm.py", line 106, in launch
    train(args)
  File "D:\Programming\MachineLearning\Projects\diffusion_models\practice\ddpm.py", line 85, in train
    loss.backward()
  File "D:\Programming\global_venv\lib\site-packages\torch\_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "D:\Programming\global_venv\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.63 GiB already allocated; 0 bytes free; 2.73 GiB reserved in total by PyTorch)
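
In case it helps with diagnosis, I can also print the allocator state right before loss.backward(); these are all standard torch.cuda functions:

    import torch

    # All values are for GPU 0 and reported in bytes.
    print(torch.cuda.memory_allocated(0))      # memory currently occupied by tensors
    print(torch.cuda.memory_reserved(0))       # memory held by the caching allocator
    print(torch.cuda.max_memory_allocated(0))  # peak tensor allocation so far
    print(torch.cuda.memory_summary(0))        # detailed per-pool breakdown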
