
Say I have the following data:

import pandas as pd
data = {
    'Reference': [1, 2, 3, 4, 5],
    'Brand': ['Volkswagen', 'Volvo', 'Volvo', 'Audi', 'Volkswagen'],
    'Town': ['Berlin', 'Berlin', 'Stockholm', 'Munich', 'Berlin'],
    'Mileage': [35000, 45000, 121000, 35000, 181000],
    'Year': [2015, 2014, 2012, 2016, 2013]
 }
df = pd.DataFrame(data)

I would like to one-hot encode the two columns "Brand" and "Town" in order to train a classifier (say, with Scikit-Learn) and predict the year.

Once the classifier is trained, I will want to predict the year on new incoming data (not used in the training), where I will need to re-apply the same one-hot encoding. For example:

new_data = {
    'Reference': [6, 7],
    'Brand': ['Volvo', 'Audi'],
    'Town': ['Stockholm', 'Munich']
}

In this context, what is the best way to one-hot encode the two columns of the Pandas DataFrame, knowing that several columns need to be encoded and that the same encoding must be applied to new data later?

This is a follow-up question to How to re-use LabelBinarizer for input prediction in SkLearn.


2 Answers


Consider the following approach.

Demo:

from sklearn.preprocessing import LabelBinarizer
from collections import defaultdict

# one LabelBinarizer per column, created on first access
d = defaultdict(LabelBinarizer)

In [7]: cols2bnrz = ['Brand','Town']

In [8]: df[cols2bnrz].apply(lambda x: d[x.name].fit(x))
Out[8]:
Brand    LabelBinarizer(neg_label=0, pos_label=1, spars...
Town     LabelBinarizer(neg_label=0, pos_label=1, spars...
dtype: object

In [10]: new = pd.DataFrame({
    ...:     'Reference': [6, 7],
    ...:     'Brand': ['Volvo', 'Audi'],
    ...:     'Town': ['Stockholm', 'Munich']
    ...: })

In [11]: new
Out[11]:
   Brand  Reference       Town
0  Volvo          6  Stockholm
1   Audi          7     Munich

In [12]: pd.DataFrame(d['Brand'].transform(new['Brand']), columns=d['Brand'].classes_)
Out[12]:
   Audi  Volkswagen  Volvo
0     0           0      1
1     1           0      0

In [13]: pd.DataFrame(d['Town'].transform(new['Town']), columns=d['Town'].classes_)
Out[13]:
   Berlin  Munich  Stockholm
0       0       0          1
1       0       1          0
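
To build the matching training matrix, the same fitted binarizers can be re-used on the original columns and concatenated with the numeric features. A minimal sketch, assuming the df, d and cols2bnrz defined above (note that LabelBinarizer emits a single 0/1 column when a column has only two classes, in which case classes_ would not line up as column names):

# Re-use the already-fitted binarizers on the training data and
# join the resulting dummy columns with the numeric features.
encoded = [pd.DataFrame(d[c].transform(df[c]), columns=d[c].classes_, index=df.index)
           for c in cols2bnrz]
X_train = pd.concat([df[['Reference', 'Mileage']]] + encoded, axis=1)
y_train = df['Year']
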
MaxU - stand with Ukraine

You could use pandas' get_dummies function to convert the categorical values.

Something like this:

import pandas as pd
data = {
    'Reference': [1, 2, 3, 4, 5],
    'Brand': ['Volkswagen', 'Volvo', 'Volvo', 'Audi', 'Volkswagen'],
    'Town': ['Berlin', 'Berlin', 'Stockholm', 'Munich', 'Berlin'],
    'Mileage': [35000, 45000, 121000, 35000, 181000],
    'Year': [2015, 2014, 2012, 2016, 2013]
 }
df = pd.DataFrame(data)

train = pd.concat([df.get(['Mileage', 'Reference', 'Year']),
                   pd.get_dummies(df['Brand'], prefix='Brand'),
                   pd.get_dummies(df['Town'], prefix='Town')], axis=1)
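
With the encoded training frame in place, training the classifier mentioned in the question could look like the following sketch, where RandomForestClassifier is just an arbitrary choice of model:

from sklearn.ensemble import RandomForestClassifier

# split the encoded frame into features and target
X_train = train.drop('Year', axis=1)
y_train = train['Year']

# hypothetical model choice; any scikit-learn classifier works the same way
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
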

For the test data you can do the same, then align its columns with the training set:

new_data = {
    'Reference': [6, 7],
    'Brand': ['Volvo', 'Audi'],
    'Town': ['Stockholm', 'Munich']
}
test = pd.DataFrame(new_data)

test = pd.concat([test.get(['Reference']),
                  pd.get_dummies(test['Brand'], prefix='Brand'),
                  pd.get_dummies(test['Town'], prefix='Town')], axis=1)

# Get the columns that are present in the training set but missing from the test set
missing_cols = set(train.columns) - set(test.columns)
# Add the missing columns to the test set with a default value of 0
for c in missing_cols:
    test[c] = 0
# Ensure the columns in the test set are in the same order as in the training set
test = test[train.columns]
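
As an equivalent one-liner for the alignment step, DataFrame.reindex adds the missing columns filled with 0, drops any extra columns (for example dummies created from values unseen during training), and puts everything in the training column order. A minimal sketch, assuming the train and test frames built above:

# One-step alignment: missing columns are added with 0, extras are dropped,
# and the column order matches the training set.
test = test.reindex(columns=train.columns, fill_value=0)
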
Gayatri
    What if the test set has a new unseen value for the one-hot-encoded columns? Will that be kept or removed in this approach? Excuse me, but I'm asking because I could not understand the last line. – Vivek Kumar Oct 11 '17 at 01:47