
I am using numba and TensorFlow in Python to compute a 3-D loop problem.

First I used numba several times to prepare the input data, and everything was fine. Then I used TensorFlow to do the deep learning, and that also worked.

Next, when I went back to repeat the first step, i.e. used numba again, I got a CUDA_ERROR_OUT_OF_MEMORY error.

If I kill the process and restart the kernel, the code runs again, but it still fails on the second pass.

It seems the problem is on the TensorFlow side, but I am not sure. I have added numba.cuda.defer_cleanup() at the beginning and end of the code, but it didn't help much.
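
Roughly, this is how I am calling it (a simplified sketch; the kernel and array names here are placeholders, not my actual code):

from numba import cuda
import numpy as np

@cuda.jit
def prepare(arr):
    # toy kernel standing in for my real input-preparation code
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= 2.0

data = np.ones(1024, dtype=np.float32)

# defer_cleanup is a context manager, so I wrap the numba work in it
with cuda.defer_cleanup():
    d_data = cuda.to_device(data)
    prepare[(data.size + 255) // 256, 256](d_data)
    result = d_data.copy_to_host()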

Does anyone know how to solve this issue?

ZHANG Juenjie
    tensorflow by default reserves all GPU memory for its own use. You can modify this behavior, see [here](https://stackoverflow.com/questions/34199233/how-to-prevent-tensorflow-from-allocating-the-totality-of-a-gpu-memory) – Robert Crovella Mar 05 '18 at 04:25

1 Answer


You might try limiting the fraction of GPU memory that TensorFlow takes in its initial allocation. For example, assume you have 12 GB of GPU memory and want TensorFlow to allocate only ~4 GB:

import tensorflow as tf

# Cap TensorFlow's initial allocation at roughly one third of GPU memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

or

# Let TensorFlow allocate GPU memory incrementally as it is needed
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

The second method tells TensorFlow to grow its GPU memory allocation as needed instead of reserving everything up front. Either option should get you past the out-of-memory error.
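
If you are on TensorFlow 2.x (where Session and ConfigProto no longer exist), the equivalent setting is memory growth, configured before any GPU work runs; a minimal sketch:

import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of all at once
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)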

saikishor