84

I want to plot the output of this simple neural network:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)

model.test_on_batch(x_test, y_test)
model.metrics_names

I have plotted accuracy and loss of training and validation:

import matplotlib.pyplot as plt

print(history.history.keys())
#  "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

Now I want to add and plot the test set's accuracy from model.test_on_batch(x_test, y_test), but from model.metrics_names I obtain the same value 'acc' used for plotting the accuracy on the training data, plt.plot(history.history['acc']). How can I plot the test set's accuracy?

Trenton McKinney
Simone
  • Probable source of the original code: [Display Deep Learning Model Training History in Keras](https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/) – Trenton McKinney Apr 28 '22 at 22:01

6 Answers

141
import keras
from matplotlib import pyplot as plt
history = model1.fit(train_x, train_y, validation_split=0.1, epochs=50, batch_size=4)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

Model Accuracy

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

Model Loss

Noam Yizraeli
Rahul Verma
Just a small addition: in updated Keras and TensorFlow 2.0, the keywords acc and val_acc have been changed to accuracy and val_accuracy respectively. So `plt.plot(history.history['acc']) plt.plot(history.history['val_acc'])` should be changed to `plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy'])` (N.B. I am using Keras version 2.2.4) – EMT Nov 07 '20 at 09:42
    I realized this and came back here to comment the same and I see you have already done that. That is SO is so great ! – Vasanth Nag K V Apr 28 '21 at 13:17
    @EMT It does not depend on the Tensorflow version to use 'accuracy' or 'acc'. It depends on your own naming. `tf.version.VERSION` gives me `'2.4.1'`. I used 'accuracy' as the key and still got `KeyError: 'accuracy'`, but 'acc' worked. If you use `metrics=["acc"]`, you will need to call `history.history['acc']`. If you use `metrics=["categorical_accuracy"]` in case of `loss="categorical_crossentropy"`, you would have to call `history.history['categorical_accuracy']`, and so on. See `history.history.keys()` for all options. – questionto42 May 23 '21 at 16:47
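As the comments note, the history keys simply follow the metric names passed to model.compile. A tiny mock illustrates the safe pattern of checking the keys before plotting (the dicts below are hypothetical stand-ins for history.history, not real training output):

```python
# Hypothetical stand-ins for history.history under two different
# metric names passed to model.compile(..., metrics=[...]).
history_old = {'loss': [0.6], 'acc': [0.80], 'val_loss': [0.7], 'val_acc': [0.75]}
history_new = {'loss': [0.6], 'accuracy': [0.80], 'val_loss': [0.7], 'val_accuracy': [0.75]}

def acc_key(history_dict):
    """Return whichever accuracy key this history actually uses."""
    for key in ('accuracy', 'acc'):
        if key in history_dict:
            return key
    raise KeyError("no accuracy metric found; check history.history.keys()")

print(acc_key(history_old))   # -> acc
print(acc_key(history_new))   # -> accuracy
```

The same lookup works for 'val_accuracy' vs 'val_acc'; when in doubt, print history.history.keys().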
40

Try

import pandas as pd
import matplotlib.pyplot as plt

pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.show()

This plots every metric stored in the history (training and validation) on a single figure. Example:

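For instance, with a mocked-up history dict (hypothetical numbers; the real dict comes from model.fit):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-in for history.history after a 4-epoch run
history_dict = {
    'loss':         [0.90, 0.60, 0.45, 0.35],
    'accuracy':     [0.55, 0.70, 0.80, 0.86],
    'val_loss':     [0.95, 0.70, 0.58, 0.52],
    'val_accuracy': [0.50, 0.66, 0.72, 0.76],
}

df = pd.DataFrame(history_dict)  # one column per metric, one row per epoch
ax = df.plot(figsize=(8, 5))     # all four curves share one axis
ax.set_xlabel('epoch')
plt.show()
```

One caveat of this one-liner: loss and accuracy share the same y-axis, which is fine for values in similar ranges but can squash the curves otherwise.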

questionto42
Maged
23

It is the same because you are training on the test set, not on the training set. Don't do that; train on the training set instead:

history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)

Change into:

history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2, shuffle=True)
Dr. Snoopy
  • I'm sorry, I have always used the training set to train the NN; it was an oversight. I am new to machine learning, and I am a little confused about the result of `model.fit( ... )`: I get loss, acc, val_loss and val_acc. I suppose those values represent loss and accuracy on training and validation, but where can I find the loss on the test set? – Simone Jan 28 '17 at 12:15
  • @Simone You can use model.evaluate on the test set to get the loss and metrics over the test set. Just make you sure use the right variables. – Dr. Snoopy Jan 28 '17 at 15:35
  • I have used model.evaluate, and I get accuracy and loss, but I can't plot them because I can't distinguish the accuracy obtained on training from the accuracy obtained on the test set. – Simone Jan 28 '17 at 17:47
  • @Simone What do you mean can't distinguish? – Dr. Snoopy Jan 28 '17 at 20:01
  • I should have an accuracy on training, an accuracy on validation, and an accuracy on test; but I get only two values: val_acc and acc, for validation and training respectively. From `model.evaluate(x_test, y_test)` and `model.metrics_names` I get acc, the same as for training. What am I doing wrong? – Simone Jan 28 '17 at 21:09
  • @Simone We don't have enough information to tell you what you are doing wrong. – Dr. Snoopy Jan 28 '17 at 22:15
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/134279/discussion-between-simone-and-matias-valdenegro). – Simone Jan 28 '17 at 23:01
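Following the model.evaluate suggestion from the comments: the test set yields a single accuracy number per evaluation, not one per epoch, so one option is to draw it as a horizontal reference line next to the per-epoch curves. A minimal sketch, with hypothetical numbers standing in for history.history and for the output of model.evaluate(x_test, y_test):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-ins: in practice these come from history.history
# and from model.evaluate(x_test, y_test) respectively.
train_acc = [0.60, 0.72, 0.80, 0.85]   # history.history['acc']
val_acc = [0.58, 0.70, 0.76, 0.79]     # history.history['val_acc']
test_loss, test_acc = 0.48, 0.77       # model.evaluate(x_test, y_test)

plt.plot(train_acc, label='train')
plt.plot(val_acc, label='validation')
# Single test-set score drawn as a dashed reference line
plt.axhline(test_acc, linestyle='--', color='red', label='test')
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(loc='lower right')
plt.show()
```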
10

Validate the model on the test data as shown below, and then plot the accuracy and loss:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), shuffle=True)
Ashok Kumar Jayaraman
6

You could also do it this way:

from keras.callbacks import EarlyStopping

regressor.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
earlyStopCallBack = EarlyStopping(monitor='loss', patience=3)
history = regressor.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=EPOCHS, batch_size=BATCHSIZE, callbacks=[earlyStopCallBack])

For plotting, I like Plotly, so:

import plotly.graph_objects as go
from plotly.subplots import make_subplots

# Create figure with secondary y-axis
fig = make_subplots(specs=[[{"secondary_y": True}]])

# Add traces
fig.add_trace(
    go.Scatter( y=history.history['val_loss'], name="val_loss"),
    secondary_y=False,
)

fig.add_trace(
    go.Scatter( y=history.history['loss'], name="loss"),
    secondary_y=False,
)

fig.add_trace(
    go.Scatter( y=history.history['val_accuracy'], name="val accuracy"),
    secondary_y=True,
)

fig.add_trace(
    go.Scatter( y=history.history['accuracy'], name="accuracy"),
    secondary_y=True,
)

# Add figure title
fig.update_layout(
    title_text="Loss/Accuracy of LSTM Model"
)

# Set x-axis title
fig.update_xaxes(title_text="Epoch")

# Set y-axes titles
fig.update_yaxes(title_text="<b>primary</b> Loss", secondary_y=False)
fig.update_yaxes(title_text="<b>secondary</b> Accuracy", secondary_y=True)

fig.show()


Nothing wrong with either of the preceding methods. Note that the Plotly graph has two y-axis scales: one for loss, the other for accuracy.

Tim Seed
1

Use the accuracy and val_accuracy keys when plotting the chart.


Plotting accuracy:

plt.plot(model.history.history["accuracy"], label="training accuracy")
plt.plot(model.history.history["val_accuracy"], label="validation accuracy")
plt.legend()
plt.show()

accuracy graph

Plotting loss:

plt.plot(model.history.history["loss"], label="training loss")
plt.plot(model.history.history["val_loss"], label="validation loss")
plt.legend()
plt.show()

loss graph

Udesh