I'm working on interpreting my XGBClassifier, which predicts the three classes "low", "medium", and "high". I saw that there are several approaches and Python libraries (such as the SHAP library) for interpreting machine learning models. However, during my research I only found SHAP applications for regression or for classification problems that output probabilities, whereas my model outputs three classes. Can I use SHAP on my problem?
My current approach (where model is my fitted XGBClassifier) is:

import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values[classindex], X.values, feature_names=X.columns, show=False)
Here classindex selects one of the model's three classes, and I set it to 0, 1, and 2 in turn to produce a separate summary plot for each class.
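For reference, here is a minimal end-to-end sketch of that per-class loop, assuming a fitted XGBClassifier model, a feature DataFrame X, and that the labels "low"/"medium"/"high" were encoded as 0, 1, and 2 before training; the class_names list and the output file names are my own illustrative choices. Depending on the SHAP version, shap_values comes back either as a list with one (n_samples, n_features) array per class or as a single (n_samples, n_features, n_classes) array, so the sketch handles both:

import shap
import matplotlib.pyplot as plt

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

class_names = ["low", "medium", "high"]  # assumed label encoding: 0, 1, 2
for classindex, name in enumerate(class_names):
    # Older SHAP versions return a list indexed by class; newer ones
    # return a (n_samples, n_features, n_classes) array instead.
    if isinstance(shap_values, list):
        per_class = shap_values[classindex]
    else:
        per_class = shap_values[:, :, classindex]
    # show=False keeps the current figure open so it can be titled and saved.
    shap.summary_plot(per_class, X, feature_names=X.columns, show=False)
    plt.title(f"SHAP summary for class '{name}'")
    plt.savefig(f"shap_summary_{name}.png", bbox_inches="tight")
    plt.close()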