
I am using a GPU to run some very large deep learning models on a dataset of 55 GB. If I use a batch size greater than 1, I get a resource-exhausted error. Even with a batch size of 1, I get segmentation faults.

The GPU has 10 GB of memory and the server has 32 GB of RAM.

Is there a way to know how much GPU memory the model will occupy with a batch size of 1? I am using tf.keras to fit models. Is there a torchsummary equivalent in TensorFlow?

Dushi Fdz
  • Sometimes these Deep Learning/GPU errors can be quite opaque. It would be more helpful to show the code/error in case there's something else going on. Maybe you're not setting the batch size correctly, or maybe there's another error? – TC Arlen Aug 10 '21 at 21:43
  • Does this answer your question? [How to calculate optimal batch size](https://stackoverflow.com/questions/46654424/how-to-calculate-optimal-batch-size) – Innat Aug 11 '21 at 12:27

1 Answer


In Keras (now part of TensorFlow) you can use model.summary() natively; it prints each layer's output shape and parameter count. See the tf.keras documentation for more info.
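For example, a minimal sketch (the architecture below is just a placeholder, not your model) that prints the summary and a rough lower bound on weight memory, assuming float32 parameters:

```python
import tensorflow as tf

# Placeholder model; substitute your own architecture here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1024,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Prints layer names, output shapes, and per-layer parameter counts.
model.summary()

# Rough lower bound: each float32 parameter takes 4 bytes. Actual GPU
# usage will be higher, since activations, gradients, and optimizer
# state also live in GPU memory during training.
total_params = model.count_params()
print(f"~{total_params * 4 / 1024**2:.2f} MB for float32 weights alone")
```

Note that model.summary() only accounts for parameters; the per-batch activation memory (which scales with batch size) is what typically triggers resource-exhausted errors.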

TC Arlen