I am using custom metrics for a multi-class classification task, based on code I found on the internet.
The class for the custom metrics is:
import numpy as np
import keras
from keras.callbacks import Callback
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

class Metrics(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.confusion = []
        self.precision = []
        self.recall = []
        self.f1s = []

    def on_epoch_end(self, epoch, logs={}):
        score = np.asarray(self.model.predict(self.validation_data[0]))
        predict = np.round(np.asarray(self.model.predict(self.validation_data[0])))
        targ = self.validation_data[1]
        self.f1s.append(f1_score(targ, predict, average='micro'))
        self.confusion.append(confusion_matrix(targ.argmax(axis=1), predict.argmax(axis=1)))
        return self.confusion, self.precision, self.recall, self.f1s
When passing a Metrics object to model.fit:
history = model.fit(X_train, np.array(Y_train),
batch_size=32,
epochs=10,
validation_data=(X_test, np.array(Y_test)),
#validation_split=0.1,
verbose=2,
callbacks=[Metrics()])
I encountered the following error:
TypeError: 'NoneType' object is not subscriptable
Traceback:
Epoch 1/10
--------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-63-1a11cfdbd329> in <module>()
6 #validation_split=0.1,
7 verbose=2,
----> 8 callbacks=[Metrics()])
3 frames
<ipython-input-62-8073719b4ec0> in on_epoch_end(self, epoch, logs)
12
13 def on_epoch_end(self, epoch, logs={}):
---> 14 score = np.asarray(self.model.predict(self.validation_data[0]))
     15 predict = np.round(np.asarray(self.model.predict(self.validation_data[0])))
16 targ = self.validation_data[1]
TypeError: 'NoneType' object is not subscriptable
Any idea why it is a NoneType object, even though I return the values in the class methods?
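For debugging, a minimal probe callback can confirm whether the callback ever receives validation data at all. This is my own sketch (the `Probe` name is hypothetical), assuming TF 2.x Keras, where `Callback.validation_data` stopped being populated:

    # Hypothetical probe callback (assumes TF 2.x Keras): in TF 2.x,
    # Callback.validation_data is no longer filled in by fit(), so it is
    # None, which would explain "'NoneType' object is not subscriptable".
    from tensorflow import keras

    class Probe(keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs=None):
            # Prints None on TF 2.x instead of the (X_val, Y_val) arrays
            print(getattr(self, "validation_data", None))

Passing callbacks=[Probe()] to model.fit would then print None at the end of every epoch if this is the cause.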
Update:
I believe the problem might be with the dataset I am using; the structure of the data might be what breaks the custom metrics. However, there is one solution that seems to work with my data:
import keras.backend as K

def f1_metric(y_true, y_pred):
    # Element-wise counts: precision = TP / (TP + FP), recall = TP / (TP + FN)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    recall = true_positives / (possible_positives + K.epsilon())
    # K.epsilon() guards against division by zero
    f1_val = 2 * (precision * recall) / (precision + recall + K.epsilon())
    return f1_val

model.compile(..., metrics=['accuracy', f1_metric])
source: https://datascience.stackexchange.com/questions/48246/how-to-compute-f1-in-tensorflow
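For completeness, here is a sketch of how the original callback could be made to work by passing the validation set in explicitly, instead of relying on self.validation_data. This is my own adaptation (the ValMetrics name is hypothetical), assuming TF 2.x Keras and one-hot encoded labels:

    import numpy as np
    from tensorflow import keras
    from sklearn.metrics import confusion_matrix, f1_score

    class ValMetrics(keras.callbacks.Callback):
        """Like the Metrics callback above, but takes the validation set directly."""

        def __init__(self, val_data):
            super().__init__()
            # Stored explicitly; not read from self.validation_data
            self.x_val, self.y_val = val_data

        def on_train_begin(self, logs=None):
            self.confusion = []
            self.f1s = []

        def on_epoch_end(self, epoch, logs=None):
            pred = self.model.predict(self.x_val)
            # argmax converts one-hot / probability rows to class indices
            y_true = self.y_val.argmax(axis=1)
            y_pred = pred.argmax(axis=1)
            self.f1s.append(f1_score(y_true, y_pred, average='micro'))
            self.confusion.append(confusion_matrix(y_true, y_pred))

It would be used like the original, but with the validation set handed to the constructor, e.g. callbacks=[ValMetrics((X_test, np.array(Y_test)))].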