
I have a Keras model whose mean squared error I'm optimizing. However, if I use the same code as in losses.py from Keras as a metric, I get a different result. Why is this?

As a metric:

from keras import backend as K

def MSE_metric(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true))

For the model:

model.compile(optimizer=SGD(lr=0.01, momentum=0.9), loss='MSE', metrics=[MSE_metric])

This results in a loss of 6.07 but an MSE_metric of 0.47.

Marcin Możejko
user3126802

2 Answers


Remember that if you use any kind of regularization, it affects your loss. Your actual loss is equal to:

loss = mse + regularization

and this is where your discrepancy comes from.
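A plain-NumPy sketch of this decomposition (the weight values and the `l2` factor below are made up for illustration, not taken from the question's model): with an L2 kernel regularizer, the loss Keras reports is the MSE plus the penalty, while the MSE metric reports the MSE alone.

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.5])
weights = np.array([0.8, -1.2])   # hypothetical layer weights
l2 = 0.01                          # hypothetical kernel_regularizer=l2(0.01)

mse = np.mean((y_pred - y_true) ** 2)   # what the MSE metric reports
penalty = l2 * np.sum(weights ** 2)     # added to the loss by the regularizer
loss = mse + penalty                    # what the logged loss reports

print(mse, penalty, loss)               # loss > mse whenever penalty > 0
```

The metric callback only ever sees `y_true` and `y_pred`, so it cannot include the penalty term; the discrepancy grows with the regularization strength.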

Marcin Możejko

Marcin is right. Here I've explored the effect of regularization and of splitting the data into batches. Both affect the training loss recorded in the logs, but regularization has the larger effect. It is always advisable to compute metrics using model.evaluate after fitting the model. If you want to see the 'actual' loss during training, one trick is to set the validation set identical to the training set (or to a subset of it, if there is too much data). Validation metrics are simply evaluated on the fitted model, unlike the training loss.
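The batch effect can be sketched without Keras at all: the logged training loss is the average of per-batch losses computed while the weights are still moving, whereas evaluate() uses the final weights. Plain-NumPy gradient descent on y = 2x stands in for model.fit here; nothing below is from the asker's model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 1))
y = 2.0 * x                    # true weight is 2

w = 0.0                        # single weight, no bias, start at zero
lr = 0.1
batch_losses = []
for xb, yb in zip(x.reshape(8, 8, 1), y.reshape(8, 8, 1)):
    pred = w * xb
    batch_losses.append(np.mean((pred - yb) ** 2))   # loss *before* the update
    grad = np.mean(2 * (pred - yb) * xb)
    w -= lr * grad

logged_loss = np.mean(batch_losses)       # analogous to the loss fit() logs
final_loss = np.mean((w * x - y) ** 2)    # analogous to model.evaluate()

print(logged_loss, final_loss)
```

Early batches are evaluated with poor weights, so their large losses inflate the epoch average; the evaluate-style loss at the end of the epoch is much smaller.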

sparklingdew