
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 14.76 GiB total capacity; 10.85 GiB already allocated; 27.75 MiB free; 11.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.

  • Does this answer your question? [How to fix this strange error: "RuntimeError: CUDA error: out of memory"](https://stackoverflow.com/questions/54374935/how-to-fix-this-strange-error-runtimeerror-cuda-error-out-of-memory) – Hagbard Sep 28 '22 at 07:05
  • You have to provide more context about your problem (code snippet, input, layer input and output dimensions, etc.) for others to understand it. – Angus Sep 28 '22 at 08:17

1 Answer


"11.31 GiB reserved in total by PyTorch": did you run another Python kernel (such as a notebook)? You can check by typing nvidia-smi in the terminal.

Then, if you want to kill the unused process, you can do so by typing kill -9 <pid number>. The -9 flag forces it to close.
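If no stray process is holding the memory, the error message itself points at another lever: when reserved memory is much larger than allocated memory, fragmentation may be the problem, and max_split_size_mb can help. A minimal sketch, assuming the standard PYTORCH_CUDA_ALLOC_CONF environment variable; the value 128 is only an illustration, not a recommendation:

```python
import os

# The allocator reads PYTORCH_CUDA_ALLOC_CONF when torch is first
# imported, so set it before the import (or in the shell that launches
# the script). 128 MiB here is an arbitrary example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # must come after the environment variable is set
```

Equivalently, you can export the variable in the shell before starting Python, which avoids any ordering concerns inside the script.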

amisotcm