3

Keras gives the overall training and validation accuracy during training.


Is there any way to get a per-class validation accuracy during training?

Update: Error log from PyCharm

File "C:/Users/wj96hq/PycharmProjects/PedestrianClassification/Awareness.py", line 82, in <module>
shuffle=True, callbacks=callbacks)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 876, in fit
callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\callbacks.py", line 365, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "C:/Users/wj96hq/PycharmProjects/PedestrianClassification/Awareness.py", line 36, in on_epoch_end
x_test, y_test = self.validation_data[0], self.validation_data[1]
TypeError: 'NoneType' object is not subscriptable
iamkk
  • Hey, you will have to write your own custom metric; you can check this: https://stackoverflow.com/questions/37657260/how-to-implement-custom-metric-in-keras – Darth Vader Aug 13 '20 at 11:24

3 Answers

4

Use this to get per-class accuracy:


import numpy as np
from sklearn.metrics import confusion_matrix
from tensorflow import keras

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])


class Metrics(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self._data = []

    def on_epoch_end(self, epoch, logs={}):
        # Run a prediction pass over the validation set Keras hands to the callback
        x_test, y_test = self.validation_data[0], self.validation_data[1]
        y_predict = np.asarray(self.model.predict(x_test))

        # Turn one-hot labels and class probabilities into class indices
        true = np.argmax(y_test, axis=1)
        pred = np.argmax(y_predict, axis=1)

        # Row-normalised confusion matrix: its diagonal holds the per-class accuracy (recall)
        cm = confusion_matrix(true, pred)
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        self._data.append({
            'classLevelaccuracy': cm.diagonal(),
        })

    def get_data(self):
        return self._data

metrics = Metrics()
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test), callbacks=[metrics])
metrics.get_data()

You can change the code in the Metrics class as you like, and this works. Just call metrics.get_data() to get all the info.
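
If `self.validation_data` comes back as `None` on newer TensorFlow/Keras versions (the `TypeError` in the question's update), a minimal sketch of a workaround is to hand the validation arrays to the callback yourself instead of relying on `self.validation_data`. The class name `PerClassAccuracy` is just illustrative:

import numpy as np
from sklearn.metrics import confusion_matrix
from tensorflow import keras

class PerClassAccuracy(keras.callbacks.Callback):
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val
        self.per_class = []

    def on_epoch_end(self, epoch, logs=None):
        # Predict on the validation set that was passed in explicitly
        y_prob = self.model.predict(self.x_val)
        true = np.argmax(self.y_val, axis=1)
        pred = np.argmax(y_prob, axis=1)
        # Diagonal of the row-normalised confusion matrix = per-class accuracy
        cm = confusion_matrix(true, pred)
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        self.per_class.append(cm.diagonal())

metrics = PerClassAccuracy(x_test, y_test)
model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test), callbacks=[metrics])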

Darth Vader
  • Here the `true` is the validation labels and `pred` is the predictions from the model, right? But I still don't understand how this becomes per-class validation accuracy during training. This is performed once the training is completed, am I right? Please correct me if I have understood it wrong. – iamkk Aug 13 '20 at 13:04
  • 1
    you will have to implement something like this : https://stackoverflow.com/questions/37657260/how-to-implement-custom-metric-in-keras – Darth Vader Aug 13 '20 at 13:06
  • Hey @keertikulkarni, I made some changes; now it will work. – Darth Vader Aug 13 '20 at 13:42
  • I have bumped into an issue. This logic works fine when I train in Google Colab, but the same logic leads to an error `TypeError: 'NoneType' object is not subscriptable` when I try training in PyCharm. Any suggestions? – iamkk Aug 18 '20 at 13:23
  • Sorry, I don't use PyCharm, but on which line do you get the error? Can you show the error log? – Darth Vader Aug 18 '20 at 13:48
  • I have updated the logs in the question. The error is at `x_test, y_test = self.validation_data[0], self.validation_data[1]` – iamkk Aug 18 '20 at 14:04
  • 1
    Have you checked whether the Keras/TensorFlow version in Google Colab is the same as in your local environment? – Darth Vader Aug 18 '20 at 14:13
  • Yes, the issue was with the version. I had to downgrade the Keras version in my local installation. Thank you. – iamkk Aug 18 '20 at 14:30
2

Well, accuracy is a global metric and there's no such thing as per-class accuracy. Perhaps you mean the proportion of each class correctly identified; that's exactly the definition of TPR, i.e. recall.

Please refer to the answers to this and this question on SO, and this question on Cross Validated Stack Exchange.
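
For instance, per-class recall on a held-out set can be computed with scikit-learn; a minimal sketch, assuming one-hot labels `y_test` and an already fitted Keras `model`:

import numpy as np
from sklearn.metrics import classification_report

y_prob = model.predict(x_test)
true = np.argmax(y_test, axis=1)
pred = np.argmax(y_prob, axis=1)
# The "recall" column is the per-class proportion correctly identified
print(classification_report(true, pred, digits=3))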

arilwan
0

If you want to get the accuracy for a certain class, or for a group of classes, masking can be a good solution. See this code:

import tensorflow as tf

def cus_accuracy(real, pred):
    # `accuracy` is a per-element accuracy function (1.0 where the prediction
    # matches the label, 0.0 otherwise); an assumed helper, see the sketch below.
    score = accuracy(real, pred)

    # Zero out every position whose true class id is below 5 ...
    mask = tf.math.greater_equal(real, 5)
    mask = tf.cast(mask, dtype=real.dtype)
    score *= mask

    # ... or above 10, so only classes 5 to 10 contribute to the metric
    mask2 = tf.math.less_equal(real, 10)
    mask2 = tf.cast(mask2, dtype=real.dtype)
    score *= mask2

    return tf.reduce_mean(score)

This metric gives you the accuracy for the classes 5 to 10. I used it for measuring the accuracy for certain words in a seq2seq model.
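
For completeness, here is one hypothetical way the per-element `accuracy` helper could look, plus a quick check of the metric on a fake batch. This is a sketch assuming integer class ids stored as floats and a 12-way softmax output, not the original seq2seq code:

import tensorflow as tf

# Hypothetical per-element helper assumed by cus_accuracy above:
# 1.0 where the predicted class matches the label, 0.0 elsewhere.
def accuracy(real, pred):
    pred_ids = tf.cast(tf.argmax(pred, axis=-1), dtype=real.dtype)
    return tf.cast(tf.math.equal(real, pred_ids), dtype=real.dtype)

# Quick check on a fake batch: 4 tokens with class ids 3, 5, 7, 11
real = tf.constant([[3., 5., 7., 11.]])
pred = tf.random.uniform((1, 4, 12))   # fake per-token class scores
print(cus_accuracy(real, pred))        # only classes 5 to 10 contribute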

MichaelJanz