I have encountered two situations in which this error arises:
- introducing a custom metric
- using multiple outputs
In both cases `acc` and `val_acc` are not computed. Strangely, Keras does compute an overall `loss` and `val_loss`.
You can remedy the first situation by adding `accuracy` to the metrics when compiling the model (a sketch of this follows below), but that may have side effects; I am not sure. In both cases, however, you can add `acc` and `val_acc` yourself in a callback. Further down I give an example for the multi-output case, where I created a custom callback that computes its own `acc` and `val_acc` by averaging the accuracies and validation accuracies of the output layers.
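For the first situation, here is a minimal sketch of what I mean by adding `accuracy` to the metrics. The toy model and `my_metric` are only placeholders, not my actual setup:

```python
import keras
from keras import backend as K

def my_metric(y_true, y_pred):
    # Placeholder custom metric: mean absolute error of the predictions.
    return K.mean(K.abs(y_true - y_pred))

model = keras.models.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(20,))
])

# Keeping 'accuracy' in the metrics list next to the custom metric makes
# Keras report acc/val_acc again, so callbacks can monitor them.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy', my_metric])
```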
Back to the multi-output case: I have a model with 5 dense output layers at the end, labeled D0..D4. The output of one epoch is as follows:
```
3540/3540 [==============================] - 21s 6ms/step - loss: 14.1437 -
D0_loss: 3.0446 - D1_loss: 2.6544 - D2_loss: 3.0808 - D3_loss: 2.7751 -
D4_loss: 2.5889 - D0_acc: 0.2362 - D1_acc: 0.3681 - D2_acc: 0.1542 - D3_acc: 0.1161 -
D4_acc: 0.3994 - val_loss: 8.7598 - val_D0_loss: 2.0797 - val_D1_loss: 1.4088 -
val_D2_loss: 2.0711 - val_D3_loss: 1.9064 - val_D4_loss: 1.2938 -
val_D0_acc: 0.2661 - val_D1_acc: 0.3924 - val_D2_acc: 0.1763 -
val_D3_acc: 0.1695 - val_D4_acc: 0.4627
```
As you can see, it outputs an overall `loss` and `val_loss`, and for each output layer `Di_loss`, `Di_acc`, `val_Di_loss` and `val_Di_acc`, for i in 0..4. All of this is the content of the `logs` dictionary that is passed as a parameter to `on_epoch_begin` and `on_epoch_end` of a callback. Callbacks have more event handlers, but for our purpose these two are the most relevant. With 5 outputs (as in my case) the dictionary holds 5 × 4 (acc, loss, val_acc, val_loss) + 2 (loss + val_loss) = 22 entries.
What I did is compute the average of all accuracies and validation accuracies and add two items to `logs`:

```python
logs['acc'] = som_acc / n_accs
logs['val_acc'] = som_val_acc / n_accs
```
Be sure you add this callback before the checkpoint callback, else the extra information you provide will not be 'seen'. If all is implemented correctly, the error message no longer appears and the model is happily checkpointing.
The code of my callback for the multiple output case is provided below.
```python
import time
import keras


class ExtraLogInfo(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Remember when this epoch started so its duration can be logged.
        self.timed = time.time()

    def on_epoch_end(self, epoch, logs=None):
        print(logs.keys())
        som_acc = 0.0
        som_val_acc = 0.0
        # Every output layer contributes acc, loss, val_acc and val_loss;
        # the remaining 2 entries are the overall loss and val_loss.
        n_accs = (len(logs) - 2) // 4
        for i in range(n_accs):
            acc_ptn = 'D{:d}_acc'.format(i)
            val_acc_ptn = 'val_D{:d}_acc'.format(i)
            som_acc += logs[acc_ptn]
            som_val_acc += logs[val_acc_ptn]
        # Add the averaged metrics so a later callback (e.g. ModelCheckpoint)
        # can monitor 'acc' and 'val_acc'.
        logs['acc'] = som_acc / n_accs
        logs['val_acc'] = som_val_acc / n_accs
        logs['time'] = time.time() - self.timed
```
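And a usage sketch of how the ordering requirement above translates into the callbacks list; the checkpoint file name, model and training data here are placeholders for your own:

```python
from keras.callbacks import ModelCheckpoint

# Hypothetical checkpoint; 'val_acc' only exists because ExtraLogInfo adds it.
checkpoint = ModelCheckpoint('weights.{epoch:02d}.hdf5',
                             monitor='val_acc',
                             save_best_only=True)

# ExtraLogInfo must come before ModelCheckpoint so that 'acc' and 'val_acc'
# are already present in logs when the checkpoint callback runs.
model.fit(x_train, [y0, y1, y2, y3, y4],
          validation_data=(x_val, [v0, v1, v2, v3, v4]),
          epochs=10,
          callbacks=[ExtraLogInfo(), checkpoint])
```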