I am trying to limit the amount of GPU memory reserved by PyTorch. YOLOv8 offers a `workspace` parameter that limits the memory PyTorch reserves. How can I do this in general (not limited to YOLOv8)?
I've tried adding a system environment variable `CUDNN_CONV_WSCAP_DBG=2048`
(Advanced → System variables), but I still get:
```
CUDA out of memory. Tried to allocate 2.52 GiB (GPU 0; 6.00 GiB total
capacity; 4.59 GiB already allocated; 0 bytes free; 4.61 GiB reserved
in total by PyTorch)
```
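For reference, I also tried setting the variable from code instead of through the system dialog, in case the dialog value wasn't being picked up (a sketch of my attempt; as I understand it, `CUDNN_CONV_WSCAP_DBG` caps the cuDNN convolution workspace in MB and must be set before `torch` is imported):

```python
import os

# Cap cuDNN's per-convolution workspace at 2048 MB.
# This has to happen before importing torch, so the
# CUDA/cuDNN context sees it at initialization time.
os.environ["CUDNN_CONV_WSCAP_DBG"] = "2048"

# import torch  # then import torch and run the model as usual
```

This also made no difference: the reserved-memory figure in the OOM message stays the same.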