
I would like to know what is loaded in VRAM when creating a new model. Even in simple cases like:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])

memory usage jumps from 0 to 4000+ MiB the moment I run the model definition line. The only thing that comes to mind is that various CUDA libraries are loaded into memory, but I am not sure about that. Any thoughts?

Thank you in advance!

BillTheKid

1 Answer


By default, TensorFlow grabs nearly all of the available GPU memory as soon as it initializes the device, in case it needs it later, which is why VRAM usage jumps the moment you create your first model. You can enable dynamic memory growth so that it only allocates memory as it actually needs it, which should fix your issue:

# Enable memory growth so TensorFlow allocates VRAM on demand instead of
# reserving almost all of it up front (run this before any GPU is used).
physical_devices = tf.config.experimental.list_physical_devices('GPU')
for device in physical_devices:
    tf.config.experimental.set_memory_growth(device, True)
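If you would rather put a hard cap on how much VRAM TensorFlow can take, a minimal sketch along these lines should also work (assuming a TF 2.x install; the 1024 MB limit is just an illustrative value):

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Create a single logical GPU limited to roughly 1 GB of memory.
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])

Like memory growth, this has to be configured before the GPU is initialized, so do it right after importing TensorFlow and before building any model.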
Sean