
I'm training a classification model and passing callbacks for early stopping, best-model saving, and learning-rate scheduling, all of which depend on the validation loss and accuracy computed at the end of each epoch. But when training runs, I get a warning and none of the callbacks fire.

My code is as below

train_datagen = ImageDataGenerator(rescale = 1/255.0,
                                   rotation_range=30,
                                   zoom_range=0.4,
                                   horizontal_flip=True,
                                   validation_split=0.2)

train_generator = train_datagen.flow_from_directory(Image_folder_path,
                                                    batch_size=batch_size,
                                                    class_mode='categorical',
                                                    target_size=(img_height, img_width),
                                                    subset='training')

validation_datagen = ImageDataGenerator(rescale = 1/255.0)

validation_generator = validation_datagen.flow_from_directory(FolderPath,
                                                              batch_size=batch_size,
                                                              class_mode='categorical',
                                                              target_size=(img_height, img_width),
                                                              subset='validation'
                                                             )

model.compile(optimizer='Adam',
              loss='categorical_crossentropy',
              metrics =['accuracy'])

EarlyStopping_callback = EarlyStopping(monitor='val_loss', patience=5, verbose=1, mode='auto')
best_model_file = snapshot_outp_dir+'\\weights.{epoch:02d}.h5'
best_model = ModelCheckpoint(best_model_file, monitor='val_accuracy', verbose = 1, save_best_only = True,save_weights_only=True)
history = model.fit_generator(train_generator,
                              epochs=30,
                              verbose=1,
                              validation_data=validation_generator,
                              #validation_freq=1,
                              callbacks = [LearningRateScheduler(step_decay,verbose=1),
                                  ReduceLROnPlateau(monitor='val_loss',factor=0.2,verbose=1,patience=1,min_lr=0.001),
                                  best_model,
                                  EarlyStopping_callback]
                              )

When I run this, I get the warnings below:

WARNING:tensorflow:Learning rate reduction is conditioned on metric 'validation_loss' which is not available. Available metrics are: loss, accuracy, lr
WARNING:tensorflow:Can save best model only with val_accuracy available, skipping.
WARNING:tensorflow:Early stopping conditioned on metric 'validation_loss' which is not available. Available metrics are: loss, accuracy, lr

I looked at similar problems, but nothing worked for me. A few of the solutions I tried:

  1. Using a larger dataset: I'm already using more than 2k samples, so this isn't the issue. I also tried different validation split ratios like 0.5 and 0.6, and the same problem happened.

  2. Changing validation_loss to val_loss and val_accuracy to validation_accuracy: this didn't work.

  3. Setting validation_freq=1: I got the same warning again.

  4. Missing validation data: validation_generator is clearly passed to model.fit_generator, and I still get the same warning.

What am I doing wrong? Any suggestion to solve this will be very helpful.

xionxavier

1 Answer


Use the image data generators like this:

train_generator = train_datagen.flow_from_directory(Image_folder_path,
                                                    batch_size=64,
                                                    class_mode='categorical',
                                                    target_size=(img_height, img_width)
                                                   )

and this:

validation_generator = validation_datagen.flow_from_directory(FolderPath,
                                                              batch_size=64,
                                                              class_mode='categorical',
                                                              target_size=(img_height, img_width)
                                                             )

And remove validation_split=0.2 from train_datagen, since you already have a separate validation directory.
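Why this helps, as far as I understand it: validation_datagen was created without validation_split, yet flow_from_directory was called with subset='validation'. Keras slices the directory listing by the split fraction, so with a split of 0 the validation subset ends up empty, the generator yields nothing, and the val_loss / val_accuracy metrics your callbacks monitor are never computed. A rough pure-Python sketch of that slicing logic (a simplified model, not the actual Keras implementation):

```python
def subset_indices(n_files, validation_split=0.0, subset=None):
    """Simplified model of how Keras' ImageDataGenerator carves a
    directory listing into training/validation subsets."""
    split_point = int(n_files * validation_split)
    if subset == 'validation':
        return list(range(split_point))           # first chunk
    if subset == 'training':
        return list(range(split_point, n_files))  # remainder
    return list(range(n_files))                   # no subset: everything

# validation_datagen had no validation_split, so subset='validation'
# selects nothing -> no validation batches -> no val_* metrics.
print(len(subset_indices(2000, validation_split=0.0, subset='validation')))  # 0
print(len(subset_indices(2000, validation_split=0.2, subset='validation')))  # 400
```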

Adarsh Wase
  • This works. Can you explain why removing the subset parameter fixes it? – xionxavier Sep 06 '21 at 10:21
  • Not sure, but I think we use the subset parameter when we don't have an actual validation set: we take some data from the training set, make it the validation set, and use subset to differentiate between training and validation data, just like sklearn's train-test-split. In your case you already have a validation set stored in its own directory, so there is no need to take it from the training set and use the subset parameter to differentiate it. – Adarsh Wase Sep 06 '21 at 10:24
  • As I said, I am not sure about this; I suggest you read the TensorFlow documentation for detailed and accurate information. – Adarsh Wase Sep 06 '21 at 10:31
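The train-test-split analogy from the comments can be made concrete. Below is a minimal hand-rolled split (a hypothetical helper, not the Keras or sklearn API) playing the role that validation_split plus subset play when all images live in a single directory:

```python
import random

def split_files(filenames, validation_split=0.2, seed=42):
    """Shuffle once, then carve the same list into two disjoint
    subsets -- roughly what validation_split + subset do for a
    single image directory."""
    rng = random.Random(seed)
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * validation_split)
    validation, training = shuffled[:cut], shuffled[cut:]
    return training, validation

files = [f"img_{i}.jpg" for i in range(10)]
train, val = split_files(files, validation_split=0.2)
print(len(train), len(val))  # 8 2
```

When the validation images already sit in their own directory, this carving step is unnecessary, which is why dropping validation_split and subset resolves the warnings.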