I'm working on Google Colab, using tf.keras with tensorflow version 2.3.0.
I'm going crazy because I can't use the model I've trained to run predictions with model.predict:
it runs out of CPU RAM. I've been able to reproduce the issue with a very minimal example.
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, Activation

matrixSide = 512  # define a big enough matrix to give memory issues

# create a toy model
inputL = Input([matrixSide, matrixSide, 12])
l1 = Conv2D(32, 3, activation='relu', padding='same')(inputL)
l1 = Conv2D(64, 1, activation='relu', padding='same')(l1)
l1 = Conv2D(64, 3, activation='relu', padding='same')(l1)
l1 = Conv2D(1, 1, padding='same')(l1)
l1 = Activation('linear')(l1)
model = Model(inputs=inputL, outputs=l1)

# run predictions
inImm = np.zeros((64, matrixSide, matrixSide, 12))
for i in range(60):
    print(i)
    outImm = model.predict(inImm)
    # K.clear_session()  # somebody suggested it...
Basically, when working on GPU, it uses 3.0 GB of CPU RAM for the first 4 iterations, then it climbs to 7 GB, then to 10 GB, and then it crashes because all the available RAM is exhausted! When running on CPU it lasts more iterations, and sometimes the RAM usage even drops from 9 GB back to 3 GB, but in the end it still crashes after 20 or so iterations.
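For anyone trying to reproduce the numbers above, the per-iteration RAM usage can be logged with psutil (a minimal sketch, assuming psutil is available, which it is by default on Colab; model and inImm are from the example above):

import os
import psutil

process = psutil.Process(os.getpid())  # handle to the current Python process
for i in range(60):
    outImm = model.predict(inImm)
    rss_gb = process.memory_info().rss / 1e9  # resident set size in GB
    print(f'iteration {i}: {rss_gb:.1f} GB of CPU RAM')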
A previous question ( Keras predict loop memory leak using tf.data.Dataset but not with a numpy array ) described similar issues when using tf.data,
but not with numpy. Somebody suggested in the TensorFlow GitHub issues (for tensorflow 1.14) calling K.clear_session()
in each loop iteration... but it doesn't help!
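For completeness, this is how I applied that suggestion (a sketch of the attempted workaround, clearing the Keras session state after every predict call):

from tensorflow.keras import backend as K

for i in range(60):
    print(i)
    outImm = model.predict(inImm)
    K.clear_session()  # supposed to release Keras graph state, but RAM still grows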
Any idea on how to fix this?