I am using the following code (source) to concatenate multiple feature extraction methods into a single feature space.

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()

X, y = iris.data, iris.target

pca = PCA(n_components=2)
selection = SelectKBest(k=1)

# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])

# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
print("Combined space has", X_features.shape[1], "features")

svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])

param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid, cv=5, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)

I want to get the names of the selected features from the above code.

For that, I tried grid_search.best_estimator_.support_. However, this raised an error:

AttributeError: 'Pipeline' object has no attribute 'support_'

Is there a way to get the names of the selected features from the fitted pipeline above in scikit-learn?

I am happy to provide more details if needed.

asked by EmJ
  • Possibly a duplicate of this SO question: https://stackoverflow.com/questions/36829875/how-to-get-feature-names-from-output-of-gridsearchcv – Ali Azam Apr 14 '19 at 07:52

1 Answer
Here is my approach to finding the final features used by the best_estimator_:

# The fitted FeatureUnion from the best pipeline
>>> features = grid_search.best_estimator_.named_steps['features']

# Number of components chosen for PCA
>>> pca = features.transformer_list[0][1]
>>> pca.n_components
3

# Features chosen by SelectKBest (boolean mask over the input columns)
>>> select_k_best = features.transformer_list[1][1]
>>> select_k_best.get_support()
array([False, False,  True, False])
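
To turn that boolean mask into human-readable names, you can index into iris.feature_names. A minimal standalone sketch (refitting SelectKBest directly here for illustration, rather than pulling it out of the grid search):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest

iris = load_iris()
X, y = iris.data, iris.target

# Fit SelectKBest on its own, mirroring the univ_select step in the pipeline
select_k_best = SelectKBest(k=1).fit(X, y)
mask = select_k_best.get_support()

# Keep only the column names where the mask is True
selected = [name for name, keep in zip(iris.feature_names, mask) if keep]
print(selected)
```

The same zip-with-mask pattern works on the select_k_best instance extracted from transformer_list above, since get_support() always returns one boolean per input column.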
answered by Venkatachalam