My CUDA program crashed during execution, before memory was flushed. As a result, device memory remained occupied.
I'm running on a GTX 580, for which nvidia-smi --gpu-reset is not supported.
Placing cudaDeviceReset() at the beginning of the program only affects the context created by the current process and doesn't free the memory allocated before it ran.
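For reference, this is the pattern I tried (a minimal sketch; the cudaMemGetInfo call is only there to show that the leaked allocations from the crashed run are still reported as used):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Destroys the primary context of THIS process only; device memory
    // held over from a previously crashed process is not released.
    cudaError_t err = cudaDeviceReset();
    printf("cudaDeviceReset: %s\n", cudaGetErrorString(err));

    // Query free/total device memory: the memory occupied by the
    // crashed run still shows up as unavailable here.
    size_t freeMem = 0, totalMem = 0;
    if (cudaMemGetInfo(&freeMem, &totalMem) == cudaSuccess) {
        printf("free: %zu MiB / total: %zu MiB\n",
               freeMem >> 20, totalMem >> 20);
    }
    return 0;
}
```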
I'm accessing a Fedora server with that GPU remotely, so physical reset is quite complicated.
So the question is: is there any way to free the device memory in this situation?