7

I have a DataFrame (`data`) whose head looks like the following:

          status      datetime    country    amount    city  
601766  received  1.453916e+09    France       4.5     Paris
669244  received  1.454109e+09    Italy        6.9     Naples

I would like to predict the status given datetime, country, amount and city.

Since status, country and city are strings, I one-hot encoded them:

one_hot = pd.get_dummies(data['country'])
data = data.drop('country', axis=1) # Drop the column as it is now one-hot encoded
data = data.join(one_hot)

I then create a simple LinearRegression model and fit my data:

y_data = data['status']
classifier = LinearRegression(n_jobs = -1)
X_train, X_test, y_train, y_test = train_test_split(data, y_data, test_size=0.2)
columns = X_train.columns.tolist()
classifier.fit(X_train[columns], y_train)

But I got the following error:

could not convert string to float: 'received'

I have the feeling I am missing something here, and I would appreciate some input on how to proceed. Thank you for reading this far!

Mornor
  • Try `y_data = data['status'] == 'received'`, I am pretty sure `LinearRegression` is expecting a numeric/boolean variable here. – m-dz Feb 24 '21 at 11:13
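The comment's suggestion can be sketched as follows (a minimal, self-contained example using a toy DataFrame in place of the real `data`): comparing the string column to `'received'` yields a boolean Series, which `LinearRegression` can consume as 0/1.

```python
import pandas as pd

# Toy stand-in for the real `data` DataFrame:
data = pd.DataFrame({
    "status": ["received", "not-received"],
    "amount": [4.5, 6.9],
})

# Comparing the string column to a value yields a boolean target,
# which a regression model can treat as 1.0 / 0.0:
y_data = data["status"] == "received"
print(y_data.tolist())  # [True, False]
```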

3 Answers

5

Consider the following approach:

first let's one-hot-encode all non-numeric columns:

In [220]: from sklearn.preprocessing import LabelEncoder

In [221]: x = df.select_dtypes(exclude=['number']) \
                .apply(LabelEncoder().fit_transform) \
                .join(df.select_dtypes(include=['number']))

In [228]: x
Out[228]:
        status  country  city      datetime  amount
601766       0        0     1  1.453916e+09     4.5
669244       0        1     0  1.454109e+09     6.9

now we can use the LinearRegression classifier:

In [230]: classifier.fit(x.drop('status', axis=1), x['status'])
Out[230]: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
MaxU - stand with Ukraine
  • Thanks a lot! Would you mind explaining why my initial solution did no work out? – Mornor Jun 01 '17 at 13:47
  • 1
    @Mornor, you are welcome. I guess `X_train[columns]` and/or `y_data` has some `string` columns, hence `could not convert string to float: 'received'` – MaxU - stand with Ukraine Jun 01 '17 at 13:48
  • 6
    I would like to add that your answer is partially correct. It only label-encodes the strings rather than one-hot encoding them, which will produce misleading results since some strings will be "worth" more than others. – Mornor Jun 27 '17 at 07:19
  • 2
    If anyone is wondering what Mornor means: label encoding produces numerical values, e.g. France = 0, Italy = 1, etc. That implies some categories are worth more than others. With one-hot encoding each category gets its own 0/1 column, e.g. France = [1, 0], Italy = [0, 1]. Also, don't forget the dummy variable trap: https://www.algosome.com/articles/dummy-variable-trap-regression.html – Juan Acevedo Jan 20 '19 at 03:05
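The distinction raised in these comments can be shown side by side (a minimal sketch using scikit-learn's LabelEncoder and pandas.get_dummies on a toy column):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

countries = pd.Series(["France", "Italy", "France"])

# Label encoding maps each category to a single integer, which
# implies an artificial ordering (Italy "greater than" France):
labels = LabelEncoder().fit_transform(countries)
print(labels)  # [0 1 0]

# One-hot encoding gives each category its own 0/1 column,
# so no category is numerically "worth more" than another:
one_hot = pd.get_dummies(countries)
print(one_hot)
```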
2

Alternative (because you should really avoid using LabelEncoder on features).

ColumnTransformer and OneHotEncoder can one-hot encode features in a dataframe:

ct = ColumnTransformer(
    transformers=[
        ("ohe", OneHotEncoder(sparse_output=False), ["country", "city"]),
    ],
    remainder="passthrough",
).set_output(transform="pandas")

print(ct.fit_transform(X))
   ohe__country_France  ohe__country_Italy  ohe__city_Naples  ohe__city_Paris  remainder__datetime  remainder__amount
0                  1.0                 0.0               0.0              1.0               1.4539                4.5
1                  0.0                 1.0               1.0              0.0               1.4541                6.9
2                  1.0                 0.0               0.0              1.0               1.4561                5.0

Full pipeline with LogisticRegression:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

raw_data = pd.DataFrame(
    [["received", 1.4539, "France", 4.5, "Paris"],
     ["received", 1.4541, "Italy", 6.9, "Naples"],
     ["not-received", 1.4561, "France", 5.0, "Paris"]],
    columns=["status", "datetime", "country", "amount", "city"],
)

# X features include all variables except 'status', y label is 'status':
X = raw_data.drop(["status"], axis=1)
y = raw_data["status"]

# Create a pipeline that one-hot encodes "country" and "city", then fits a LogisticRegression:
pipe = make_pipeline(
    ColumnTransformer(
        transformers=[
            ("one-hot-encode", OneHotEncoder(), ["country", "city"]),
        ],
        remainder="passthrough",
    ),
    LogisticRegression(),
)

pipe.fit(X, y)
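For completeness, here is how the fitted pipeline is used at prediction time (a self-contained sketch restating the data above; `handle_unknown="ignore"` is an addition so that categories unseen at fit time don't raise an error):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

raw_data = pd.DataFrame(
    [["received", 1.4539, "France", 4.5, "Paris"],
     ["received", 1.4541, "Italy", 6.9, "Naples"],
     ["not-received", 1.4561, "France", 5.0, "Paris"]],
    columns=["status", "datetime", "country", "amount", "city"],
)
X, y = raw_data.drop(columns=["status"]), raw_data["status"]

pipe = make_pipeline(
    ColumnTransformer(
        [("ohe", OneHotEncoder(handle_unknown="ignore"), ["country", "city"])],
        remainder="passthrough",
    ),
    LogisticRegression(),
).fit(X, y)

# New, unseen rows are encoded with the categories learned at fit time,
# then passed to the classifier -- no manual preprocessing needed:
new_rows = pd.DataFrame(
    [[1.4570, "Italy", 7.2, "Naples"]],
    columns=["datetime", "country", "amount", "city"],
)
print(pipe.predict(new_rows))
```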
Alexander L. Hayes
1

To do one-hot encoding in a scikit-learn project, you may find it cleaner to use the scikit-learn-contrib project category_encoders: https://github.com/scikit-learn-contrib/categorical-encoding, which implements many common categorical-variable encoding methods, including one-hot.