I have two NumPy arrays, each containing 7 predictions per instance from two classifiers. I want to measure the agreement between the two classifiers with Cohen's kappa for each predicted instance. However, when the predictions from both classifiers contain only zeros, for instance predictions_model1[0] = [0,0,0,0,0,0,0] and predictions_model2[0] = [0,0,0,0,0,0,0], I get the following warning:
RuntimeWarning: invalid value encountered in true_divide
k = np.sum(w_mat * confusion) / np.sum(w_mat * expected)
My code is below:
for i in range(len(model1_pool_preds)):
    print(cohen_kappa_score(model1_pool_preds[i], model2_pool_preds[i]))
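A minimal reproduction of the problem (assuming sklearn's cohen_kappa_score, with the variable names below being my own):

```python
import math
from sklearn.metrics import cohen_kappa_score

# Both classifiers predict all zeros for the 7 items of one instance.
preds_a = [0, 0, 0, 0, 0, 0, 0]
preds_b = [0, 0, 0, 0, 0, 0, 0]

# Only one label appears, so kappa's denominator is 0 and the
# RuntimeWarning is emitted; the returned score is nan.
score = cohen_kappa_score(preds_a, preds_b)
print(score)  # nan
```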
How can I get rid of this warning? Is it possible to get a score of 1.0 when all labels are the same, even if they are a sequence of zeros?
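For context, the only workaround I can think of is to special-case identical prediction arrays before calling cohen_kappa_score (a sketch; agreement_score is a hypothetical wrapper name of my own):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement_score(preds_a, preds_b):
    """Hypothetical wrapper: treat identical prediction arrays as
    perfect agreement (1.0), since kappa is undefined (nan) when
    only a single label appears in both arrays."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    if np.array_equal(preds_a, preds_b):
        return 1.0
    return cohen_kappa_score(preds_a, preds_b)

print(agreement_score([0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]))  # 1.0
```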