
I am trying to limit the amount of GPU memory reserved by PyTorch. YOLOv8 offers a `workspace` parameter that limits the memory PyTorch reserves. How can I do this in general (not limited to YOLOv8)? I've tried adding a system environment variable `CUDNN_CONV_WSCAP_DBG=2048` (Additional -> System variables), but I still get:

CUDA out of memory. Tried to allocate 2.52 GiB (GPU 0; 6.00 GiB total
capacity; 4.59 GiB already allocated; 0 bytes free; 4.61 GiB reserved
in total by PyTorch)
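For PyTorch in general, the usual way to cap per-process GPU memory is `torch.cuda.set_per_process_memory_fraction` (available since PyTorch 1.8), rather than a cuDNN environment variable. A minimal sketch, assuming a CUDA-enabled PyTorch build and the 6 GiB card from the error message above:

```python
import torch

if torch.cuda.is_available():
    # Cap this process at roughly 2 GiB of a 6 GiB device.
    # The fraction is relative to the device's total memory,
    # so 2048 MiB / 6144 MiB ~= 0.33.
    torch.cuda.set_per_process_memory_fraction(2048 / 6144, device=0)

    # Allocations that would push the process past the cap now raise
    # a CUDA out-of-memory error instead of PyTorch reserving more
    # of the card for its caching allocator.
```

Note this limits how much PyTorch may allocate; it does not shrink memory already reserved. To release cached-but-unused blocks back to the driver, `torch.cuda.empty_cache()` can be called, though it does not free tensors that are still referenced.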
  • Does [this](https://stackoverflow.com/q/49529372/1593077) answer your question? – einpoklum Aug 25 '23 at 19:55
  • Does this answer your question? [Force GPU memory limit in PyTorch](https://stackoverflow.com/questions/49529372/force-gpu-memory-limit-in-pytorch) – Mike Doe Aug 25 '23 at 19:56

0 Answers