I am trying to set up a custom scorer in sklearn (using make_scorer) to use during cross-validation. Specifically, I want to calculate top-2 accuracy for a multi-class classification example.
Technically, my problem is that I need to evaluate the predicted probabilities (using needs_proba=True), and I need the list of classes in order to make sense of the probability matrix.
I have compiled an example below. While I can set up the custom scoring function for the non-cv case by providing the classes in the make_scorer call, I am not able to set this up properly for the cv case, where the classes are determined dynamically per fold and thus I can only read them in during the evaluation itself.
I know that there are many similar questions, but I did not see a working solution for my specific use case, so it would be great if somebody could help me (excuse my ignorance in case this is solved somewhere already).
Thanks a lot in advance! David
PS: If I am not mistaken, the class labels are crucial for every use of make_scorer that involves probabilities, so I assume this is a generic problem.
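To make this concrete: the columns of the matrix returned by predict_proba are ordered according to the fitted estimator's classes_ attribute, so without classes_ there is no way to interpret the matrix. A minimal standalone illustration (the _demo names exist only in this snippet):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
X_demo, y_demo = load_iris(return_X_y=True)
clf_demo = LogisticRegression(max_iter=1000).fit(X_demo, y_demo)
proba_demo = clf_demo.predict_proba(X_demo)  # shape (n_samples, n_classes)
# Column k of proba_demo holds the probability of class clf_demo.classes_[k];
# without classes_ the columns cannot be mapped back to labels.
print(clf_demo.classes_)  # [0 1 2] for iris
print(proba_demo[0])      # probabilities aligned with classes_ above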
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.model_selection import train_test_split, cross_validate
data = load_iris()
X = data.data
y = data.target
# DIRECT USE OF CUSTOM SCORER ##################################################################################
# Simple test train split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Define the model and fit it
model = LogisticRegression(max_iter=1000)  # raised max_iter to avoid convergence warnings on iris
model.fit(X_train, y_train)
# Function that returns the prediction with the highest probability, or the
# correct label if it is among the top-n classes by probability
def top_n_consolidation(y_label, y_prob, class_names, n=2):
    # indices of the n most probable classes, mapped back to class labels
    top_recs = class_names[[i[0] for i in sorted(enumerate(y_prob), key=lambda x: x[1], reverse=True)][0:n]]
    if any(i == y_label for i in top_recs):
        return y_label
    else:
        return top_recs[0]
# Calculate accuracy based on top-n predictions
# --> NOTE: THIS FUNCTION RELIES ON class_names IN ORDER TO MAKE USE OF THE PROBABILITIES
def accuracy_top_n_function(y_labels, y_probs, class_names, n=2):
    cons_preds = [top_n_consolidation(y_labels[i], y_probs[i, :], class_names, n)
                  for i in range(y_probs.shape[0])]
    return accuracy_score(y_true=y_labels, y_pred=cons_preds)
# Make a custom scorer for top-2 classification
accuracy_2 = make_scorer(accuracy_top_n_function, class_names=model.classes_, n=2, needs_proba=True)
# --> NOTE: THIS WORKS, BECAUSE model.fit WAS ALREADY EXECUTED
# Calculate top-2 accuracy on the held-out test set
accuracy_2(model, X_test, y_test)  # scorers are called as scorer(estimator, X, y)
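# Quick sanity check I would expect to hold: top-2 accuracy can never be lower
# than plain top-1 accuracy, because every correct top-1 prediction is also
# within the top 2 (my own reasoning, not something from the sklearn docs):
assert accuracy_2(model, X_test, y_test) >= accuracy_score(y_test, model.predict(X_test))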
# USE OF CUSTOM SCORER FOR CROSS-VALIDATION ####################################################################
# Define a new model to ensure that we distinguish from the case above
model_cv = LogisticRegression(max_iter=1000)
# Define custom scorer for the cv case
accuracy_2_cv = make_scorer(accuracy_top_n_function, class_names=model_cv.classes_, n=2, needs_proba=True)
# NOTE: THIS IS NOT WORKING, AS model_cv.classes_ IS NOT YET KNOWN
# (the model is unfitted at this point, so accessing classes_ raises an AttributeError)!
# Define custom scores to use
custom_scoring = {'acc': 'accuracy',
                  'acc2': accuracy_2_cv}
cross_validate(model_cv, X, y, cv=3, scoring=custom_scoring, return_train_score=True)
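For completeness, the closest I have come to a workaround is to bypass make_scorer entirely and pass a plain callable with the documented scorer signature (estimator, X, y), since by the time the scorer is called inside cross_validate the per-fold estimator has been fitted and exposes classes_. A sketch of that idea, reusing the definitions above (accuracy_2_dynamic is just a name I made up, and I am not sure this is the intended approach):
def accuracy_2_dynamic(estimator, X, y):
    # At scoring time the per-fold estimator is already fitted,
    # so its classes_ attribute is available here.
    y_probs = estimator.predict_proba(X)
    return accuracy_top_n_function(y, y_probs, class_names=estimator.classes_, n=2)
custom_scoring_dynamic = {'acc': 'accuracy',
                          'acc2': accuracy_2_dynamic}
cross_validate(model_cv, X, y, cv=3, scoring=custom_scoring_dynamic, return_train_score=True)
This runs for me, but it gives up the needs_proba machinery of make_scorer, which is why I am asking whether there is a proper way to do this.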