I have an imbalanced dataset for a 4-class classification problem. I built a MultinomialNB model and used k-fold cross-validation with 10 folds, and I want to calculate accuracy, precision, recall, and f1_score.
I used the following to do so:
from sklearn.metrics import make_scorer, accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import KFold, cross_validate
from sklearn.naive_bayes import MultinomialNB

scoring = {'accuracy' : make_scorer(accuracy_score),
           'precision' : make_scorer(precision_score),
           'recall' : make_scorer(recall_score),
           'f1_score' : make_scorer(f1_score)}
cv = KFold(n_splits=10, random_state=1, shuffle=True)
model = MultinomialNB()
scores = cross_validate(model, X_train_tfidf, y, scoring=scoring, cv=cv, n_jobs=-1)
print(scores)
but in the output all the score arrays are NaN:
'test_accuracy': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]),
'test_precision': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]),
'test_recall': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]),
'test_f1_score': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan])}
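For context, `cross_validate` defaults to `error_score=np.nan`, so any exception raised by a scorer inside a fold is silently converted to NaN. Passing `error_score='raise'` re-raises the underlying error. A minimal sketch of that diagnosis, using synthetic count data in place of the `X_train_tfidf` / `y` from the question (which aren't shown):

```python
import numpy as np
from sklearn.metrics import make_scorer, precision_score
from sklearn.model_selection import KFold, cross_validate
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 20))  # non-negative counts, as MultinomialNB expects
y = rng.integers(0, 4, size=200)        # 4 classes, like the question's target

cv = KFold(n_splits=10, random_state=1, shuffle=True)
try:
    # Same scorer as in the question: precision_score with its default
    # average='binary', which is invalid for a multiclass target.
    cross_validate(MultinomialNB(), X, y,
                   scoring={'precision': make_scorer(precision_score)},
                   cv=cv, error_score='raise')
except ValueError as e:
    # With error_score='raise' the real problem is shown instead of NaN
    print(e)
```

This reveals the cause: the metric itself rejects the multiclass target, rather than anything being wrong with the model or the folds.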
I also tried the built-in scorer names:

scoring = {'accuracy' : 'accuracy',
           'precision' : 'precision',
           'recall' : 'recall'}

but got the same result.
So is there another way to calculate accuracy, precision, recall, and f1_score with k-fold cross-validation?
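One possible fix, assuming the NaNs come from the binary-only defaults of `precision_score`, `recall_score`, and `f1_score`: pass an explicit `average` through `make_scorer`. A sketch on synthetic stand-in data (since `X_train_tfidf` isn't shown); `zero_division=0` is an optional extra to suppress warnings when a fold predicts no samples for some class:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, make_scorer,
                             precision_score, recall_score)
from sklearn.model_selection import KFold, cross_validate
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 20))  # stand-in for X_train_tfidf (count features)
y = rng.integers(0, 4, size=200)        # 4 classes

# For a multiclass target these metrics need an averaging strategy:
# 'macro' weights all 4 classes equally; 'weighted' weights by class
# support, which is often preferable for an imbalanced dataset.
scoring = {'accuracy' : make_scorer(accuracy_score),
           'precision': make_scorer(precision_score, average='macro', zero_division=0),
           'recall'   : make_scorer(recall_score, average='macro', zero_division=0),
           'f1_score' : make_scorer(f1_score, average='macro', zero_division=0)}

cv = KFold(n_splits=10, random_state=1, shuffle=True)
scores = cross_validate(MultinomialNB(), X, y, scoring=scoring, cv=cv, n_jobs=-1)
print({k: v.mean() for k, v in scores.items() if k.startswith('test_')})
```

The string-name shortcut fails for the same reason: `'precision'` maps to the binary-averaged scorer, while `'precision_macro'` / `'precision_weighted'` (and likewise for recall and f1) are the multiclass-safe names.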