
I am trying to plot accuracy and loss curves with Matplotlib, but no curves are displayed: the figure shows only empty axes, and the x-axis starts at a negative value instead of 0.

Code:

import tensorflow as tf
import matplotlib.pyplot as plt

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

max_accuracy=0.70
for i in range(10):
    print("Epoch no",i+1)
    history = model.fit(train_data,train_labels, epochs=1, batch_size=32,verbose=1,validation_data=(test_data,test_labels))
    if history.history['val_accuracy'][0]>max_accuracy:
        print("New Best model found above")
        max_accuracy=history.history['val_accuracy'][0]
        model.save('CNN-logo.h5')
        
model=tf.keras.models.load_model('CNN-logo.h5')
[train_loss, train_accuracy] = model.evaluate(train_data, train_labels)
print("Evaluation result on Train Data : Loss = {}, accuracy = {}".format(train_loss, train_accuracy))
[test_loss, test_acc] = model.evaluate(test_data, test_labels)
print("Evaluation result on Test Data : Loss = {}, accuracy = {}".format(test_loss, test_acc))
#Plot the loss curves
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
plt.show()
#Plot the Accuracy Curves
plt.figure(figsize=[8,6]) 
plt.plot(history.history['accuracy'],'r',linewidth=3.0) 
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16) 
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
plt.show()

[Screenshot: the figure shows only empty axes, no curves]


2 Answers


Each call to `model.fit(..., epochs=1)` returns a new `History` object, so after the loop `history` only holds the metrics of the final single epoch. A one-point series draws no visible line, and Matplotlib's default axis margins around that single point at x = 0 are why the x-axis starts at a negative value. To save the best model, you can use callbacks instead of the manual loop.

checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath='CNN-logo.h5', monitor='val_accuracy', mode='max', save_best_only=True)
earlystopping_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

history = model.fit(x, y, epochs=50, batch_size=32, verbose=1, validation_split=0.2,
                    callbacks=[checkpoint_callback, earlystopping_callback])

model=tf.keras.models.load_model('CNN-logo.h5')
[train_loss, train_accuracy] = model.evaluate(x, y)
print("Evaluation result on Train Data : Loss = {}, accuracy = {}".format(train_loss, train_accuracy))
[test_loss, test_acc] = model.evaluate(x, y)
print("Evaluation result on Test Data : Loss = {}, accuracy = {}".format(test_loss, test_acc))

#Plot the loss curves
plt.figure(figsize=[8,6])
plt.plot(history.history['loss'],'r',linewidth=3.0)
plt.plot(history.history['val_loss'],'b',linewidth=3.0)
plt.legend(['Training loss', 'Validation Loss'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Loss',fontsize=16)
plt.title('Loss Curves',fontsize=16)
plt.show()

#Plot the Accuracy Curves
plt.figure(figsize=[8,6]) 
plt.plot(history.history['accuracy'],'r',linewidth=3.0) 
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16) 
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
plt.show()

Output:

[Screenshots: loss and accuracy curves over 50 epochs]
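If you prefer to keep the manual epoch loop from the question, another option (a sketch, not part of this answer's approach) is to collect each iteration's `history.history` dict in a list and merge them before plotting, so the curves span all 10 epochs:

```python
def merge_histories(histories):
    """Concatenate a list of history.history dicts into one dict of full curves."""
    full = {}
    for h in histories:
        for key, values in h.items():
            # Each one-epoch fit contributes one value per metric; append in order.
            full.setdefault(key, []).extend(values)
    return full
```

Inside the loop you would append `history.history` to the list after each `model.fit(...)` call, then plot e.g. `merge_histories(histories)['loss']` instead of `history.history['loss']`.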

  • Well, thanks Mohil. But you can see above that I am using a loop over the epochs, and the loop runs for 10 iterations. Can you elaborate on what `save_best_only` and `monitor='val_loss'` are doing here? – Hamza Jan 20 '21 at 15:38
  • `save_best_only` saves the model with best performance yet; `monitor` is what should be taken into account for monitoring the performance of model. Detailed doc can be found [here](https://keras.io/api/callbacks/model_checkpoint/). – Mohil Patel Jan 20 '21 at 15:44
  • Can you tell me after how many epochs I should stop training? To my understanding, I should stop at the 6th epoch to prevent overfitting. – Hamza Jan 20 '21 at 15:50
  • You can use the [`EarlyStopping`](https://keras.io/api/callbacks/early_stopping/) callback for that. It will stop the training when the metric which you are monitoring stops improving. – Mohil Patel Jan 20 '21 at 16:45
  • Can you add the early-stopping code to my code above? It would be helpful for me. – Hamza Jan 21 '21 at 05:02
  • I have included it – Mohil Patel Jan 21 '21 at 07:01
  • Can I use `train_test_split` instead of `validation_split`, and pass the resulting (test_data, test_labels) to `validation_data`? – Hamza Jan 21 '21 at 11:56
  • Yes you can. I did it because I was lazy :) – Mohil Patel Jan 21 '21 at 13:06
  • Mohil, one last question from my side: how do I set `patience`, i.e. the number of epochs without improvement after which training is stopped? – Hamza Jan 21 '21 at 13:12
  • There is no standard value; it depends on the dataset, the model, and other factors. You can get more information [here](https://stackoverflow.com/questions/50284898/keras-earlystopping-which-min-delta-and-patience-to-use) and in other resources too. – Mohil Patel Jan 21 '21 at 14:42
  • @MohilPatel may I kindly draw your attention to a similar [question](https://stackoverflow.com/questions/71305465/how-can-plot-loss-curves-for-training-and-test-for-multi-output-regression-task). – Mario Mar 01 '22 at 08:14
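Regarding the `train_test_split` question in the comments: a minimal sketch of an up-front 80/20 split (plain Python here; `sklearn.model_selection.train_test_split` serves the same purpose), whose test half can then be passed as `validation_data`:

```python
import random

def split_80_20(data, labels, seed=42):
    # Shuffle indices once so data and labels stay aligned, then cut 80/20.
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(0.8 * len(idx))
    pick = lambda ids: ([data[i] for i in ids], [labels[i] for i in ids])
    train_data, train_labels = pick(idx[:cut])
    test_data, test_labels = pick(idx[cut:])
    return (train_data, train_labels), (test_data, test_labels)

# The test half then goes to model.fit(..., validation_data=(test_data, test_labels)).
```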