If (in C++ / CUDA) cudaMallocManaged() is used to allocate an array shared between host and GPU memory, and the program then hits an exit(1) (say, in host code), does this leave dangling memory on the GPU permanently?
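For concreteness, a minimal sketch of the kind of program I mean (the kernel `touch`, the array size, and the early-exit condition are just illustrative):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void touch(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;
    int *data = nullptr;

    // Allocate managed (unified) memory, accessible from both host and device.
    cudaError_t err = cudaMallocManaged(&data, n * sizeof(int));
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMallocManaged failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    touch<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    // Host-side error path: exit(1) is reached before cudaFree(data).
    // This is the case I'm asking about -- is the managed allocation reclaimed?
    if (data[0] != 1) {
        exit(1);  // note: no cudaFree(data) on this path
    }

    cudaFree(data);
    return 0;
}
```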
My guess is that the answer is NO, based on "Will exit() or an exception prevent an end-of-scope destructor from being called?", but I am not sure whether the GPU has some kind of reclaiming mechanism.