I am hitting a CUDA out-of-memory error in PyTorch. The error message is as follows:

CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 23.65 GiB total capacity; 21.91 GiB already allocated; 25.56 MiB free; 22.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.

I have already tried reducing the batch size and optimizing my code, but the issue persists. I would like to know how to address this problem and prevent the out-of-memory error. I am also unsure how to set the max_split_size_mb parameter mentioned in the error message.
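For reference, here is roughly what I was planning to try, based on my understanding that PYTORCH_CUDA_ALLOC_CONF must be set in the environment before torch initializes its allocator. The 128 MB value is only a guess on my part, not something I have verified helps for my workload:

```python
import os

# Set the allocator option before importing torch, since the variable
# is read when the CUDA caching allocator is configured.
# 128 is a hypothetical starting value, not a verified setting.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # imported only after the variable is set
```

Is this the correct way to apply the setting, or does it need to be exported in the shell before the Python process starts?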
Any guidance or suggestions on resolving this issue would be greatly appreciated. Thank you in advance!