
I have a Pipeline built as follows:

Pipeline(steps=[('preprocessor',
                 ColumnTransformer(remainder='passthrough',
                                   transformers=[('text',
                                                  Pipeline(steps=[('CV',
                                                                   CountVectorizer())]),
                                                  'Tweet'),
                                                 ('category',
                                                  OneHotEncoder(handle_unknown='ignore'),
                                                  ['Tweet_ID']),
                                                 ('numeric',
                                                  Pipeline(steps=[('knnImputer',
                                                                   KNNImputer(n_neighbors=2)),
                                                                  ('scaler',
                                                                   MinMaxScale...
                                                   'CS',
                                                   'UC',
                                                   'CL',
                                                   'S',
                                                   'SS',
                                                   'UW',
                                                    ...])])),
                ('classifier', LogisticRegression())])

I am trying to get feature names:

feature_names = lr['preprocessor'].transformers_[0][1].get_feature_names()
coefs = lr.named_steps["classifier"].coef_.flatten()

zipped = zip(feature_names, coefs)
features_df = pd.DataFrame(zipped, columns=["feature", "value"])
features_df["ABS"] = features_df["value"].apply(lambda x: abs(x))
features_df["colors"] = features_df["value"].apply(lambda x: "green" if x > 0 else "red")
features_df = features_df.sort_values("ABS", ascending=False)
features_df

However I am getting an error:

----> 6 feature_names = lr['preprocessor'].transformers_[0][1].get_feature_names()
      7 coefs = lr.named_steps["classifier"].coef_.flatten()
      8 

AttributeError: 'Pipeline' object has no attribute 'get_feature_names'
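
As far as I can tell, transformers_[0][1] is the nested Pipeline wrapping the CountVectorizer, not the vectorizer itself, so something like the sketch below (assuming scikit-learn < 1.0, where get_feature_names still exists) reaches the text feature names, but only those:

# Sketch only: drill into the fitted 'text' sub-pipeline of the
# ColumnTransformer and ask the CountVectorizer for its vocabulary terms.
cv = lr.named_steps['preprocessor'].named_transformers_['text'].named_steps['CV']
text_feature_names = cv.get_feature_names()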

I already went through the following answers:

but unfortunately they were not as helpful as I had hoped.

Does anyone know how to fix it? Happy to provide more info, if needed.


An example of pipeline is the following:

lr = Pipeline(steps=[('preprocessor', preprocessing),
                      ('classifier', LogisticRegression(C=5, tol=0.01, solver='lbfgs', max_iter=10000))])

where preprocessing is

preprocessing = ColumnTransformer(
    transformers=[
        ('text', text_preprocessing, 'Tweet'),
        ('category', categorical_preprocessing, c_feat),
        ('numeric', numeric_preprocessing, n_feat)
    ], remainder='passthrough')

Before splitting into train and test sets, I separate the different types of features:

text_columns=['Tweet']

target=['Label']

c_feat=['Tweet_ID']

num_features=['CS','UC','CL','S','SS','UW']
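
For reference, a minimal sketch of how these lists could be used before the split (df is a placeholder for the full DataFrame, not a name from my actual code):

# Hypothetical assembly of X and y; 'df' is assumed to contain all of the
# columns listed above.
from sklearn.model_selection import train_test_split

X = df[text_columns + c_feat + num_features]
y = df[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)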

Following David's answer and link, I have tried as follows:

For numerical:

class NumericalTransformer(BaseEstimator, TransformerMixin):
    def __init__(self):
        super().__init__()

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        # Numerical features to pass down the numerical pipeline
        X = X[num_features]  # num_features is already a list, so no extra brackets
        X = X.replace([np.inf, -np.inf], np.nan)
        return X.values
# Defining the steps in the numerical pipeline
numerical_pipeline = Pipeline(steps=[
    ('num_transformer', NumericalTransformer()),
    ('imputer', KNNImputer(n_neighbors=2)),
    ('minmax', MinMaxScaler())])
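
A quick sanity check that the numerical pipeline runs end to end (the values below are made up):

# Toy data just to exercise NumericalTransformer -> KNNImputer -> MinMaxScaler;
# the numbers are invented and only meant to show that the shapes line up.
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'CS': [1.0, 2.0, np.nan, 4.0],
    'UC': [0.5, np.inf, 1.5, 2.0],   # inf becomes NaN, then gets imputed
    'CL': [3.0, 1.0, 2.0, 5.0],
    'S':  [0.1, 0.2, 0.3, 0.4],
    'SS': [1.0, 1.0, 2.0, 2.0],
    'UW': [7.0, 8.0, 9.0, 10.0],
})
print(numerical_pipeline.fit_transform(toy).shape)  # (4, 6)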

For categorical:

class CategoricalTransformer(BaseEstimator, TransformerMixin):
    def __init__(self):
        super().__init__()

    # Return self nothing else to do here
    def fit(self, X, y=None):
        return self

    # Helper function that converts values to Binary depending on input
    def create_binary(self, obj):
        if obj == 0:
            return 'No'
        else:
            return 'Yes'

    # Transformer method for this transformer
    def transform(self, X, y=None):
        # Categorical features to pass down the categorical pipeline
        return X[c_feat].values  # c_feat is already a list, so no extra brackets
# Defining the steps in the categorical pipeline
categorical_pipeline = Pipeline(steps=[
    ('cat_transformer', CategoricalTransformer()),
    ('one_hot_encoder', OneHotEncoder(handle_unknown='ignore'))])
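
After fitting, the encoder itself knows the expanded column names (via get_feature_names on scikit-learn < 1.0, get_feature_names_out on newer versions):

# Toy fit to show where the one-hot feature names live after fitting.
import pandas as pd

toy = pd.DataFrame({'Tweet_ID': ['a', 'b', 'a']})
categorical_pipeline.fit(toy)
ohe = categorical_pipeline.named_steps['one_hot_encoder']
print(ohe.get_feature_names(c_feat))  # ['Tweet_ID_a' 'Tweet_ID_b']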

and for text feature:

class TextTransformer(BaseEstimator, TransformerMixin):
    def __init__(self):
        super().__init__()

    # Return self nothing else to do here
    def fit(self, X, y=None):
        return self

    # Helper function that converts values to Binary depending on input
    def create_binary(self, obj):
        if obj == 0:
            return 'No'
        else:
            return 'Yes'

    # Transformer method for this transformer
    def transform(self, X, y=None):
        # Text features to pass down the text pipeline
        return X['Tweet'].values  # 1-D, since CountVectorizer expects an iterable of strings
# Defining the steps in the text pipeline
text_pipeline = Pipeline(steps=[
    ('text_transformer', TextTransformer()),
    ('cv', CountVectorizer())])
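
Similarly, the fitted CountVectorizer knows the vocabulary-based feature names:

# Toy fit to show where the text feature names live after fitting.
import pandas as pd

toy = pd.DataFrame({'Tweet': ['great day', 'bad day today']})
text_pipeline.fit(toy)
cv = text_pipeline.named_steps['cv']
print(cv.get_feature_names())  # ['bad', 'day', 'great', 'today']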

Then I combine the categorical, numerical and text pipelines horizontally into one big pipeline using FeatureUnion:

# using FeatureUnion
union_pipeline = FeatureUnion(transformer_list=[
    ('categorical_pipeline', categorical_pipeline),
    ('numerical_pipeline', numerical_pipeline), 
    ('text_pipeline', text_pipeline)])
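
Note that calling get_feature_names() on the union directly still fails, because its members are Pipelines rather than the underlying transformers (a sketch of the expected behaviour):

# The union's members are Pipelines, which do not expose get_feature_names,
# so this raises the same kind of AttributeError as before.
try:
    union_pipeline.get_feature_names()
except AttributeError as err:
    print(err)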

and finally:

# Combining the custom imputer with the categorical, text and numerical pipeline
preprocess_pipeline = Pipeline(steps=[('custom_imputer', CustomImputer()),
                                      ('full_pipeline', union_pipeline)])

What is still not clear to me is how to get the feature names.

Math
1 Answer


You need to implement a dedicated get_feature_names function, as you are using a custom transformer.

Please refer to this question for details, where you can find a code example.
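
For instance, applied to the pipelines in the question, the idea could look roughly like this. It is a sketch only: it assumes scikit-learn < 1.0 (where get_feature_names still exists), and that the classifier is wired and fitted as lr = Pipeline(steps=[('preprocessor', preprocess_pipeline), ('classifier', LogisticRegression())]):

from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np

# Give the custom transformer a get_feature_names method, since its output
# columns keep their original names.
class NumericalTransformer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X = X[num_features].replace([np.inf, -np.inf], np.nan)
        return X.values

    def get_feature_names(self):
        return num_features

# The categorical and text names live on the fitted OneHotEncoder and
# CountVectorizer, so collect everything in the union's order
# (categorical, numerical, text):
union = lr.named_steps['preprocessor'].named_steps['full_pipeline']

ohe = union.transformer_list[0][1].named_steps['one_hot_encoder']
cat_names = list(ohe.get_feature_names(c_feat))        # one-hot columns

num_names = union.transformer_list[1][1].named_steps['num_transformer'].get_feature_names()

cv = union.transformer_list[2][1].named_steps['cv']
text_names = list(cv.get_feature_names())              # vocabulary terms

feature_names = cat_names + num_names + text_names
coefs = lr.named_steps['classifier'].coef_.flatten()
assert len(feature_names) == len(coefs)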

David Thery
  • Thanks David. I added more code after your answer. It is, however, not fully clear how to get the feature names. Could you please have a look at the updated question, which should reflect your suggestion? Many thanks. If you think it more appropriate, I will ask a new question on this – Math May 17 '21 at 11:40
  • I'll take a look as soon as I can, but quite busy this afternoon :) – David Thery May 17 '21 at 11:41
  • thanks a million, David. Very much appreciated :) – Math May 17 '21 at 11:44