
I am building a model in TensorFlow. I detected overfitting, and after modifying the model's parameters it seems the overfitting is gone, but I am not sure. Below are the two graphs I get in TensorBoard after training.

epoch_loss:

[epoch_loss graph]

epoch_accuracy:

[epoch_accuracy graph]

After changing the Smoothing parameter in the TensorBoard sidebar, the accuracy graph looks like this:

[Smoothing slider]

[epoch_accuracy graph with smoothing applied]

I have two questions:

  1. What is the Smoothing parameter for?
  2. What do you make of the model's behavior during training?

Thanks to all of you.

huy

2 Answers


Smoothing works like a moving average: if you increase it, each plotted point takes more nearby values into account, which gives the curve its smoothed look. To better understand the mathematics behind it, check this question.
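TensorBoard's Smoothing slider applies an exponential moving average to the plotted scalars. A minimal sketch of that idea (the debiasing step is an assumption about TensorBoard's internals, not something stated in this thread):

```python
def smooth(values, weight):
    """Exponentially smooth a series, as TensorBoard's slider does.

    `weight` corresponds to the Smoothing slider (0 = no smoothing,
    closer to 1 = heavier smoothing).
    """
    smoothed = []
    last = 0.0
    for i, v in enumerate(values, start=1):
        last = last * weight + (1 - weight) * v    # running EMA
        smoothed.append(last / (1 - weight ** i))  # debias early steps
    return smoothed

# A constant series stays constant, whatever the smoothing weight.
print(smooth([1.0, 1.0, 1.0], 0.9))
```

With `weight=0` the series is returned unchanged; as `weight` approaches 1, each point averages over a longer history, which is why a noisy accuracy curve flattens out.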

Your model seems almost perfect (slightly underfit); I recommend continuing training for about 500-1000 additional epochs.

B Douchet
  • Thanks for your comments. I trained the model for 3000 epochs and saw that at approximately epoch 2500 the validation loss started to exceed the training loss, so I decided to train the model for 2400 epochs. – ChusmaSelecta Dec 16 '20 at 08:51

Smoothing works as described in the other answer. Your model seems to be working correctly: both the training and validation loss TRENDS are still decreasing. I might suggest adding some BatchNormalization layers to your model to get smoother convergence; documentation is here. I also recommend adding the callbacks shown below.

import tensorflow as tf

# Halve the learning rate whenever validation loss stops improving
lra = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.5,        # new_lr = lr * factor
    patience=1,        # epochs without improvement before reducing
    verbose=1,
    mode="auto",
    min_delta=0.0001,
    cooldown=0,
    min_lr=0)

# Stop training once validation loss has not improved for 4 epochs
estop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0,
    patience=4,
    verbose=1,
    mode="auto",
    baseline=None,
    restore_best_weights=True)  # roll back to the best weights seen

# in model.fit add
callbacks=[lra, estop]
Gerry P
  • Thanks for the contribution; I have a question. I applied the two callbacks you suggested, and although I configured 2400 epochs, training only completed 30 epochs. What could this be due to? If this is expected, the configuration does not let me train the model fully. – ChusmaSelecta Dec 16 '20 at 08:50
  • It was probably stopped by the EarlyStopping callback, which halts training if the validation loss fails to decrease for `patience` epochs. This indicates your model is starting to overfit. If you want to train further, increase the value of the patience parameter. – Gerry P Dec 16 '20 at 16:03
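The patience behavior discussed in the comments above can be sketched in plain Python (a simplified illustration of the idea, not Keras's actual implementation):

```python
def epochs_run(val_losses, patience):
    """Return how many epochs training runs before early stopping.

    Training stops once `patience` consecutive epochs pass without
    the validation loss improving on the best value seen so far.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:      # improvement: reset the counter
            best = loss
            wait = 0
        else:                # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch  # early stop triggers here
    return len(val_losses)    # never triggered

# Losses improve for 3 epochs, then plateau: with patience=4,
# training stops at epoch 7 instead of running all 10.
print(epochs_run([1.0, 0.8, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7], 4))
```

This is why a run configured for 2400 epochs can stop after only 30: the counter is driven by the validation loss, not by the configured epoch count.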