
For a regression problem I want to compare several metrics, but from the history I can only get accuracy, which makes no sense for regression purposes. How can I get other metrics such as mean_squared_error and so on?

def create_model(optimizer='adam'):
    input_layer = ...
    output_layer = ...
    model = Model(input_layer, output_layer)
    model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
    return model

model = KerasRegressor(build_fn=create_model, verbose=0)

batch_size = [1, 2]
epochs = [1, 2]
optimizer = ['Adam', 'sgd']
param_grid = dict(batch_size=batch_size,
                  epochs=epochs,
                  optimizer=optimizer)

grid_obj = RandomizedSearchCV(estimator=model,
                              param_distributions=param_grid,
                              n_jobs=1,
                              cv=3,
                              scoring=['explained_variance', 'neg_mean_squared_error', 'r2'],
                              refit='neg_mean_squared_error',
                              return_train_score=True,
                              verbose=2)

X_train1, X_val1, y_train1, y_val1 = train_test_split(X_train1, y_train1, test_size=0.2, shuffle=False)

grid_result = grid_obj.fit(X_train1, y_train1)

grid_best = grid_result.best_estimator_
history = grid_best.fit(X_train1, y_train1,
                        validation_data=(X_val1, y_val1))

print(history.history.keys())
> dict_keys(['val_loss', 'val_accuracy', 'loss', 'accuracy'])

I have seen https://stackoverflow.com/a/50137577/6761328 to get e.g.

history.history['accuracy']

which works, but I can't access mean_squared_error or anything else:

history.history['neg_mean_squared_error']
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-473-eb96973bf014> in <module>
----> 1 history.history['neg_mean_squared_error']

KeyError: 'neg_mean_squared_error'

This question is ultimately a follow-up to How to compare different metrics?, as I think this question answers the other one.


1 Answer


In stand-alone Keras (not sure for the scikit-learn wrapper), history.history['loss'] (or val_loss respectively for the validation set) would do the job.

Here, 'loss' and 'val_loss' are keys; give

print(history.history.keys())

to see what keys are available in your case, and you will find among them the required ones for the loss (might even be the same, i.e. 'loss' and 'val_loss').

As a side note, you should remove completely metrics=['accuracy'] from your model compilation - as you correctly point out, accuracy is meaningless in regression settings (you might want to check What function defines accuracy in Keras when the loss is mean squared error (MSE)?).
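To make the naming issue concrete, here is a sketch (with made-up values, not actual Keras output) of what `history.history` would look like if you compiled with, say, `metrics=['mean_squared_error', 'mean_absolute_error']`. Keras names the keys after the metric strings you pass to `compile()`; the scikit-learn scorer names such as `'neg_mean_squared_error'` never appear there:

```python
# Hypothetical shape of history.history after compiling with
#   model.compile(loss='mean_squared_error',
#                 metrics=['mean_squared_error', 'mean_absolute_error'])
# and fitting with validation_data -- the values are illustrative only.
history_dict = {
    'loss': [2.10, 1.42],                  # same as training MSE here,
    'mean_squared_error': [2.10, 1.42],    # since the loss *is* MSE
    'mean_absolute_error': [1.15, 0.94],
    'val_loss': [2.31, 1.58],
    'val_mean_squared_error': [2.31, 1.58],
    'val_mean_absolute_error': [1.22, 1.01],
}

# The scikit-learn scorer name is not a Keras key:
print('neg_mean_squared_error' in history_dict)  # False

# The Keras metric name (as given in compile) is:
print(history_dict['mean_squared_error'])
```

Note that the scikit-learn scorer strings (`'neg_mean_squared_error'`, `'r2'`, ...) only apply inside `RandomizedSearchCV`'s `scoring`/`refit`; the Keras `history` object knows nothing about them.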

  • Thank you! So "loss" is basically mse or mae and so on? And I can only use one of them in parallel? – Ben Oct 25 '19 at 05:01
  • 1
    @Ben yes, it is what you have used as `loss` in your model compilation (what do you mean in parallel?) – desertnaut Oct 25 '19 at 06:50
  • Ok, still a question. When using multiple losses, how do I know which one exactly "loss" is referring to? – Ben Oct 25 '19 at 07:20
  • 1
    @Ben You cannot use multiple *losses* in model compilation; loss is what is being optimized by the learning algorithm, and this is unique per model. – desertnaut Oct 25 '19 at 07:34
  • When I have the keys `print(history.history.keys()) > dict_keys(['val_loss', 'val_mean_squared_error', 'loss', 'mean_squared_error'])`, what does this mean? Is loss = mean_squared_error? – Ben Oct 25 '19 at 08:16
  • @Ben based on your `model.compile` statement, it should be. Are the values the same? – desertnaut Oct 25 '19 at 09:09