I am having an issue with my graphics card retaining memory after a CUDA script finishes executing, even though I call cudaFree().
On boot the total used memory is about 128 MB, but once the script has run, the memory stays in use and the script then runs out of memory mid-execution.
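For context, the memory handling in my script follows roughly the pattern sketched below (this is a simplified stand-in, not my actual code; the kernel and the allocation size are just placeholders):

#include <cuda_runtime.h>

// Placeholder kernel; the real one does more work than this.
__global__ void dummyKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 24;               // ~64 MB of floats (placeholder size)
    float *d_data = NULL;

    cudaMalloc((void **)&d_data, n * sizeof(float));
    dummyKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);                    // explicit free before exit
    // cudaDeviceReset();                // not sure whether this should even be necessary here
    return 0;
}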
nvidia-smi output:
+------------------------------------------------------+
| NVIDIA-SMI 340.29     Driver Version: 340.29         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 660 Ti  Off  | 0000:01:00.0     N/A |                  N/A |
| 10%   43C    P0    N/A /  N/A |   2031MiB /  2047MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
+-----------------------------------------------------------------------------+
Is there any way to free this memory back up without rebooting, perhaps a terminal command?
Also, is this normal behaviour if I am not managing memory correctly in my CUDA script, or should this memory be freed automatically when the script stops or is quit?