
I am trying to train thousands of Sequential models in a loop. On every iteration my program leaks memory until I run out and get an OOM exception.

I already asked a similar question before (Training multiple Sequential models in a row slows down) and have seen others with similar problems (Keras: Out of memory when doing hyper parameter grid search). The suggested solution is always to add K.clear_session() after you have finished using the model. I did that, as advised in my previous question, and I am still leaking memory.

Here is code to reproduce the issue.

import random
import time
import numpy  # needed by create_training_dataset below
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import tracemalloc


def run():
    tracemalloc.start()
    num_input_nodes = 12
    num_hidden_nodes = 8
    num_output_nodes = 1

    random_numbers = random.sample(range(1000), 50)
    train_x, train_y = create_training_dataset(random_numbers, num_input_nodes)

    for i in range(100):
        snapshot = tracemalloc.take_snapshot()
        for j in range(10):
            start_time = time.time()
            nn = Sequential()
            nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
            nn.add(Dense(num_output_nodes))
            nn.compile(loss='mean_squared_error', optimizer='adam')
            nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)
            K.clear_session()
            print("Iteration {iter}. Current time {t}. Took {elapsed} seconds".
                  format(iter=i*10 + j + 1, t=time.strftime('%H:%M:%S'), elapsed=int(time.time() - start_time)))

        top_stats = tracemalloc.take_snapshot().compare_to(snapshot, 'lineno')

        print("[ Top 5 differences ]")
        for stat in top_stats[:5]:
            print(stat)


def create_training_dataset(dataset, input_nodes):
    """
    Outputs a training dataset (train_x, train_y) as numpy arrays.
    Each item in train_x has 'input_nodes' number of items while train_y items are of size 1
    :param dataset: list of ints
    :param input_nodes:
    :return: (numpy array, numpy array), train_x, train_y
    """
    data_x, data_y = [], []
    for i in range(len(dataset) - input_nodes - 1):
        a = dataset[i:(i + input_nodes)]
        data_x.append(a)
        data_y.append(dataset[i + input_nodes])
    return numpy.array(data_x), numpy.array(data_y)

run()

Here is the output I get from the first memory debug print:

/tensorflow/python/framework/ops.py:121: size=3485 KiB (+3485 KiB), count=42343 (+42343)
/tensorflow/python/framework/ops.py:1400: size=998 KiB (+998 KiB), count=8413 (+8413)
/tensorflow/python/framework/ops.py:116: size=888 KiB (+888 KiB), count=32468 (+32468)
/tensorflow/python/framework/ops.py:1185: size=795 KiB (+795 KiB), count=3179 (+3179)
/tensorflow/python/framework/ops.py:2354: size=599 KiB (+599 KiB), count=5886 (+5886)

System info:

  • python 3.5
  • keras (1.2.2)
  • tensorflow (1.0.0)
  • Can you try adding a `tf.reset_default_graph()` (and `import tensorflow as tf` at the top) after `K.clear_session()`? – mrry Mar 20 '17 at 16:06
  • Works like a charm. Thanks! – G_E Mar 20 '17 at 22:07
  • Trying to run keras models in a loop. I get a `ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (2000, 10, 10)`, which is due to the loop. I have tried `K.clear_session()` followed by `tf.reset_default_graph()` at the end of the loop, but it does not help. Any idea? – Anakin Sep 20 '18 at 10:07

1 Answer


The memory leak stems from Keras and TensorFlow using a single "default graph" to store the network structure, which increases in size with each iteration of the inner for loop.
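You can watch the default graph grow with a small standalone snippet (my illustration, not part of the original answer; it uses the TF 1.x API matching the question's tensorflow 1.0.0):

import tensorflow as tf

g = tf.get_default_graph()
for i in range(3):
    tf.constant(i)  # each new op is appended to the same default graph
    print(len(g.get_operations()))  # prints 1, 2, 3: the graph keeps growing

Every Sequential model built in the question's inner loop adds its layers' ops to this one graph in the same way, so the graph (and the memory attributed to ops.py in the tracemalloc output) grows without bound.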

Calling K.clear_session() frees some of the (backend) state associated with the default graph between iterations, but an additional call to tf.reset_default_graph() is needed to clear the Python state.
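Concretely, the inner loop from the question becomes something like this (a minimal sketch reusing the question's variable names; only the last line is new):

import tensorflow as tf  # add this at the top of the file

for j in range(10):
    nn = Sequential()
    nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
    nn.add(Dense(num_output_nodes))
    nn.compile(loss='mean_squared_error', optimizer='adam')
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)
    K.clear_session()         # frees the backend session state
    tf.reset_default_graph()  # also drops the Python-side graph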

Note that there might be a more efficient solution: since nn does not depend on either of the loop variables, you can define it outside the loop, and reuse the same instance inside the loop. If you do that, there is no need to clear the session or reset the default graph, and performance should increase because you benefit from caching between iterations.
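A sketch of that reuse pattern (the weight reset via get_weights/set_weights is my addition, on the assumption that each run should restart from the same initial weights rather than continue training):

# Build and compile the model once, outside the loop.
nn = Sequential()
nn.add(Dense(num_hidden_nodes, input_dim=num_input_nodes, activation='relu'))
nn.add(Dense(num_output_nodes))
nn.compile(loss='mean_squared_error', optimizer='adam')
initial_weights = nn.get_weights()  # remember the freshly initialized weights

for i in range(1000):
    nn.set_weights(initial_weights)  # reset before each training run
    nn.fit(train_x, train_y, nb_epoch=300, batch_size=2, verbose=0)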

mrry
  • Not exactly. `K.clear_session()` already calls `tf.reset_default_graph()` internally, so that second call would just duplicate it, wouldn't it? – Dilshat May 17 '19 at 12:23
  • Dilshat is right, line 95 in the [tensorflow_backend.py](https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow_backend.py) shows this. So no need for that second call. – Markus Jun 21 '19 at 22:12