
Plotting model loss and model accuracy for sequential models in Keras seems to be straightforward. However, how can they be plotted when we split the data into X_train, Y_train, X_test, Y_test and use cross-validation? I get errors because 'val_acc' is not found in history.history, which means I cannot plot the results on the test set.

Here is my code:

# Imports needed for the code below
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
from keras.constraints import maxnorm
from keras import regularizers
from sklearn.model_selection import StratifiedKFold
import matplotlib.pyplot as plt

# Create the model
def create_model(neurons=379, init_mode='uniform', activation='relu', inputDim=8040, dropout_rate=0.1, learn_rate=0.001, momentum=0.7, weight_constraint=6):
    model = Sequential()
    # single hidden layer with max-norm weight constraint and L2 regularization
    model.add(Dense(neurons, input_dim=inputDim, kernel_initializer=init_mode, activation=activation, kernel_constraint=maxnorm(weight_constraint), kernel_regularizer=regularizers.l2(0.002)))
    #model.add(Dense(200, input_dim=inputDim, activation=activation)) # optional second hidden layer
    #model.add(Dense(60, input_dim=inputDim, activation=activation))  # optional second hidden layer
    model.add(Dropout(dropout_rate))  # rate must be in [0, 1)
    model.add(Dense(1, activation='sigmoid'))
    optimizer = RMSprop(lr=learn_rate)
    # compile the model with the optimizer instance so learn_rate is actually used
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

model = create_model() #weight constraint= 3 or 4

seed = 7
# Define k-fold cross validation test harness

kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X_train, Y_train):
    print("TRAIN:", train, "VALIDATION:", test)

    # Fit the model (note: this fits on the full X_train/Y_train each fold; the fold indices in train/test are not used here)
    history = model.fit(X_train, Y_train, epochs=40, batch_size=50, verbose=0)

    # Plot model loss and model accuracy
    # list all data in history
    print(history.history.keys())
    # summarize history for accuracy
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])  # raises KeyError: 'val_acc'
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()
    # summarize history for loss
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])  # raises KeyError: 'val_loss'
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()

I would appreciate any suggested changes to the code so that I can get those plots for the test set as well.

Mauro Nogueira
  • Can we see the error you are getting? Something about `val_acc`? – cosinepenguin Jul 27 '17 at 15:58
  • Sure. `plt.plot(history.history['val_acc'])` raises `KeyError: 'val_acc'`. If I remove the `plt.plot(history.history['val_acc'])` lines, it returns the plots for each cross-validated dataset (train only). – Mauro Nogueira Jul 27 '17 at 16:10

1 Answer


According to the Keras.io documentation, in order to use 'val_acc' and 'val_loss' you need to enable validation and accuracy monitoring. Doing so is as simple as adding a validation_split argument to model.fit in your code!

Instead of:

history = model.fit(X_train, Y_train, epochs=40, batch_size=50, verbose=0)

You would need to do something like:

history = model.fit(X_train, Y_train, validation_split=0.33, epochs=40, batch_size=50, verbose=0)

With validation_split=0.33, Keras holds out the last third of the training data for validation and records val_loss and val_acc for it at the end of every epoch.
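For example (a minimal sketch, assuming X_train and Y_train are NumPy arrays and a Keras 2.x-era build that reports the metric as 'acc'), the validation keys then show up in history.history and the plotting code from the question works unchanged:

history = model.fit(X_train, Y_train, validation_split=0.33, epochs=40, batch_size=50, verbose=0)
print(history.history.keys())  # now includes 'val_loss' and 'val_acc' alongside 'loss' and 'acc'
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])  # no longer raises KeyError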

Here's an additional potentially helpful source:

Plotting learning curve in keras gives KeyError: 'val_acc'

Hope it helps!

cosinepenguin
  • That would be a solution, but I am using cross-validation. However, I just tried `history = model.fit(X_train, Y_train, epochs=42, batch_size=50, validation_data=(X_test, Y_test), verbose=0)` and this worked. – Mauro Nogueira Jul 27 '17 at 16:42
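Following up on that comment, here is a minimal sketch of validating each fold against its own held-out split inside the k-fold loop, which gives every fold its own val_acc/val_loss history. It assumes X_train and Y_train are NumPy arrays (so they can be indexed by the fold index arrays) and the same 'acc'/'val_acc' key names; the name fold_model is just for illustration:

for train, test in kfold.split(X_train, Y_train):
    fold_model = create_model()  # fresh model per fold so folds do not share weights
    history = fold_model.fit(X_train[train], Y_train[train],
                             validation_data=(X_train[test], Y_train[test]),
                             epochs=40, batch_size=50, verbose=0)
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])  # present because validation_data was passed
    plt.title('model accuracy (one fold)')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

Re-creating the model inside the loop keeps the folds independent; reusing one model would carry weights over from fold to fold.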