
I would like to know whether there is any way to visualize or find the most important/contributing features after fitting an MLP classifier in scikit-learn.

Simple example:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline


# Load the data: columns 0-248 are features, column 249 is the label
data = pd.read_csv('All.csv', header=None)
X, y = data.iloc[:, 0:249].values, data.iloc[:, 249].values

# Standardize features and feed them into the MLP via a pipeline
sc = StandardScaler()
mlc = MLPClassifier(activation='relu', random_state=1, nesterovs_momentum=True)
loo = LeaveOneOut()
pipe = make_pipeline(sc, mlc)

# Grid-search over architecture and optimizer settings with leave-one-out CV
parameters = {
    "mlpclassifier__hidden_layer_sizes": [(168,), (126,), (498,), (166,)],
    "mlpclassifier__solver": ('sgd', 'adam'),
    "mlpclassifier__alpha": [0.001, 0.0001],
    "mlpclassifier__learning_rate_init": [0.005, 0.001],
}
clf = GridSearchCV(pipe, parameters, n_jobs=-1, cv=loo)
clf.fit(X, y)

model = clf.best_estimator_
print("The best model and parameters are the following: {}".format(model))
seralouk

1 Answer


Good question. The lack of interpretability of neural network models is a pain point the ML/NN community has been struggling with.

One recent approach that has been receiving attention is LIME (Ribeiro et al., KDD '16). Here's a relevant excerpt from the abstract:

  • "In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction".

There's also a GitHub repository (Python, yay!).

(If you do try LIME, please share your experience in the question comments.)
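To make this concrete, here is a minimal sketch of how the lime package's LimeTabularExplainer could be pointed at the fitted pipeline from your question. It reuses X, y, and model = clf.best_estimator_ from your snippet; the generated feature names are placeholders, and num_features=10 is an arbitrary choice:

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Build the explainer on the raw (unscaled) training data; the pipeline's
# predict_proba handles the standardization internally.
explainer = LimeTabularExplainer(
    X,
    feature_names=["f{}".format(i) for i in range(X.shape[1])],  # placeholder names
    class_names=np.unique(y).astype(str),
    mode="classification",
)

# Explain a single prediction (here, the first sample) and list the
# locally most influential features with their weights.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=10)
print(exp.as_list())

Keep in mind that each explanation is local to the chosen sample; running it over several samples and aggregating the feature weights is a common way to get a rough global picture of which features matter most.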

Tomer Levinboim