
I have a Keras model, and data that I load into a Pandas dataframe. For the purposes of testing and debugging, my data set is rather modest. In fact, I can process and load the entire data set into memory and train the model without issue.

Later I'm going to transition to training the model on a much larger data set, so I wrote the following generator:

import numpy as np
from tensorflow import keras

# Random augmentation applied to each training batch
augment_image = keras.preprocessing.image.ImageDataGenerator(
    rotation_range = 20,
    zoom_range = 0.1,
    width_shift_range = 0.1,
    height_shift_range = 0.1,
    shear_range = 0.1,
    horizontal_flip = True,
    vertical_flip = True,
    fill_mode = "nearest"
)

def data_generator(df, batch_size, augment = None):

    df = df.sample(frac = 1).reset_index(drop = True) # Shuffle
    num_rows = len(df.index)
    batch_num = 0

    while True:

        # Select the row indices for this batch, clamped to the frame size
        index_low  = min(batch_num * batch_size, num_rows - 1)
        index_high = min(index_low + batch_size, num_rows)
        batch_num += 1
        if batch_num * batch_size > num_rows - 1:
            batch_num = 0  # Wrap around once every row has been served

        # Load the images for this batch and (optionally) augment them
        subframe = df.iloc[index_low:index_high]
        images = load_images(subframe, path, image_type)
        if augment is not None:
            images = augment.flow(np.array(images))
        targets = subframe["target"]

        yield (np.array(images), np.array(targets))

The function "load_images" simply takes a list of filenames (i.e. subframe["image_type"]) and loads the actual images associated with those filenames. I know it (and everything not unique to the generator) works because, as I mentioned before, I can train the model when I simply load the entire data set into memory. (That is, process and load the entire data set -- actual images and all -- into a single variable, and pass that to model.fit.)

But when I try to use the above generator to pass data to model.fit, like so ...

history = model.fit(
    data_generator(train_set, batch_size=32, augment=augment_image),
    verbose = 2,
    epochs = 1,
    steps_per_epoch = len(train_set.index) // 32,
    validation_data = data_generator(test_set, batch_size=32),
    validation_steps = len(test_set.index) // 32,
    callbacks = [checkpoint, early_stopping, tensorboard]
)

... it hangs for about 15 minutes before finally exiting with what seems like an absurd MemoryError:

Traceback (most recent call last):
  File "datagen_test.py", line 193, in <module>
    callbacks = [checkpoint, early_stopping, tensorboard]
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 706, in _process_inputs
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 747, in __init__
    peek, x = self._peek_and_restore(x)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 850, in _peek_and_restore
    peek = next(x)
  File "datagen_test.py", line 84, in data_generator
    yield (np.array(images), np.array(targets))
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\keras_preprocessing\image\iterator.py", line 104, in __next__
    return self.next(*args, **kwargs)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\keras_preprocessing\image\iterator.py", line 116, in next
    return self._get_batches_of_transformed_samples(index_array)
  File "C:\Users\Username\Anaconda3\envs\tensorflow2\lib\site-packages\keras_preprocessing\image\numpy_array_iterator.py", line 148, in _get_batches_of_transformed_samples
    dtype=self.dtype)
MemoryError: Unable to allocate 18.4 MiB for an array with shape (32, 224, 224, 3) and data type float32

That seems like a rather small amount of memory to fail to allocate. (And again, I have enough memory to simply load the entire data set and train the model that way; for completeness, that working version is sketched below.) What am I doing wrong?
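
For completeness, the in-memory version that works is essentially the following (a simplified sketch -- the variable names are illustrative, not my exact code):

train_images  = np.array(load_images(train_set, path, image_type))
train_targets = np.array(train_set["target"])
test_images   = np.array(load_images(test_set, path, image_type))
test_targets  = np.array(test_set["target"])

history = model.fit(
    train_images, train_targets,
    batch_size = 32,
    verbose = 2,
    epochs = 1,
    validation_data = (test_images, test_targets),
    callbacks = [checkpoint, early_stopping, tensorboard]
)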

  • Did you check how many training and validation steps are used during `fit` (`steps_per_epoch` and `validation_steps`)? Data generators sometimes run forever if those options aren't set correctly. – Vishnuvardhan Janapati Apr 27 '20 at 04:56
  • @VishnuvardhanJanapati `steps_per_epoch` and `validation_steps` are set as shown in the code in the post. – Nickolas Apr 27 '20 at 06:29
