I am using a GPU to run some very large deep learning models on a 55 GB dataset. With a batch size greater than 1, I get a ResourceExhaustedError; even with a batch size of 1, I get segmentation faults.
The GPU has 10 GB of memory and the server has 32 GB of RAM.
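
For context, here is a minimal sketch of how I am fitting the model; the tiny model and the random arrays below are just placeholders for my actual (much larger) model and the 55 GB dataset:

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-ins for my real model and data -- the real model is
# far larger and the real dataset (55 GB) is streamed from disk.
x = np.random.rand(64, 224, 224, 3).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# batch_size > 1 raises ResourceExhaustedError on the real model;
# even batch_size=1 eventually crashes with a segmentation fault.
model.fit(x, y, batch_size=1, epochs=1)
```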
Is there a way to find out how much GPU memory a single batch (batch size of 1) will occupy? I am using tf.keras to fit the models. Is there a torchsummary equivalent in TensorFlow?
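
For reference, this is the kind of report I mean; with torchsummary in PyTorch I get per-layer output shapes, parameter counts, and estimated memory sizes in MB (the model below is just a placeholder):

```python
import torch.nn as nn
from torchsummary import summary

# Placeholder model, just to show the report torchsummary prints:
# per-layer output shapes, parameter counts, and estimates of the
# input/params/activations memory in MB.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),
)
summary(model, input_size=(3, 224, 224), device="cpu")
```

The "Estimated Total Size (MB)" line at the bottom of that report is essentially what I would like to obtain for a tf.keras model before training starts.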