
I want to find out how much GPU memory my TensorFlow model needs at inference. So I used tf.contrib.memory_stats.MaxBytesInUse, which returned 6168 MB.

But with config.gpu_options.per_process_gpu_memory_fraction I can restrict the model to a much smaller fraction of my GPU's memory, and it still runs fine without taking more time per inference step.
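For reference, this is roughly how the fraction cap is set in a TF 1.x session config (a sketch, assuming TF 1.x; `allow_growth` is the related option that allocates memory on demand instead of grabbing a pool up front, so the process footprint then approximates what the model actually needs):

```python
import tensorflow as tf  # TF 1.x

config = tf.ConfigProto()
# Hard cap: the process may allocate at most this fraction of GPU memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.3
# Alternative: grow the allocation on demand rather than pre-allocating,
# so tools like nvidia-smi reflect real usage instead of the reserved pool.
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # ... build and run the model, then query the peak allocation ...
    peak_bytes = sess.run(tf.contrib.memory_stats.MaxBytesInUse())
```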

Is there a way to determine how much GPU memory a TensorFlow model actually requires? I could decrease the GPU memory fraction until TF crashes, but is there a more elegant and precise way?

Maracana
  • Is your model a tf.keras model? – Djib2011 Jul 10 '19 at 08:16
  • No. Is there a way to do it in keras? – Maracana Jul 10 '19 at 08:28
  • Yes, I use the function suggested in [this post](https://stackoverflow.com/questions/43137288/how-to-determine-needed-memory-of-keras-model). – Djib2011 Jul 10 '19 at 08:29
  • Thank you. It looks like this is just a calculation based on the shapes in the model. I could adapt that for TF. Have you ever tried to run your keras model with less GPU memory? – Maracana Jul 10 '19 at 08:54
  • You mean it requires e.g. 10GB and I have only 8GB in my GPU? Yes, it raises a MemoryError (it can't swap memory with RAM if that's what you're asking). – Djib2011 Jul 10 '19 at 08:57
  • My point was: if the function from that post says your model needs 10GB and it crashes with 9.9GB, then the intuitive calculation is accurate and TF/Keras does no further optimization. – Maracana Jul 10 '19 at 09:04
  • Oh, I don't know if it's that precise, meaning that keras/tensorflow introduces memory overheads not attributed to the model. I do it to get more of a feel for how much memory it takes. E.g. I tried once to use a large 3D CNN which crashed. Then I ran the script and it showed I needed 80GB of memory for a batch size of 1... I knew that I wasn't even close and that there was nothing I could do about it, so I threw away the model and tried a much much smaller one. – Djib2011 Jul 10 '19 at 10:03

0 Answers