
I'm learning different methods to convert categorical variables to numeric for machine-learning classifiers. I came across pd.get_dummies and sklearn.preprocessing.OneHotEncoder(), and I wanted to see how they differ in terms of performance and usage.

I found a tutorial on how to use OneHotEncoder() at https://xgdgsc.wordpress.com/2015/03/20/note-on-using-onehotencoder-in-scikit-learn-to-work-on-categorical-features/ since the sklearn documentation wasn't too helpful on this feature. I have a feeling I'm not doing it correctly.

Can someone explain the pros and cons of using pd.get_dummies over sklearn.preprocessing.OneHotEncoder() and vice versa? I know that OneHotEncoder() gives you a sparse matrix, but other than that I'm not sure how it is used and what its benefits are over the pandas method. Am I using it inefficiently?

import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.datasets import load_iris
sns.set()

%matplotlib inline

# Load the iris dataset
iris = load_iris()
n_samples, m_features = iris.data.shape

# Features, target, and a mapping from target codes to species names
X, y = iris.data, iris.target
D_target_dummy = dict(zip(np.arange(iris.target_names.shape[0]), iris.target_names))

DF_data = pd.DataFrame(X, columns=iris.feature_names)
DF_data["target"] = pd.Series(y).map(D_target_dummy)
#sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  \
#0                  5.1               3.5                1.4               0.2   
#1                  4.9               3.0                1.4               0.2   
#2                  4.7               3.2                1.3               0.2   
#3                  4.6               3.1                1.5               0.2   
#4                  5.0               3.6                1.4               0.2   
#5                  5.4               3.9                1.7               0.4   

DF_dummies = pd.get_dummies(DF_data["target"])
#setosa  versicolor  virginica
#0         1           0          0
#1         1           0          0
#2         1           0          0
#3         1           0          0
#4         1           0          0
#5         1           0          0

from sklearn.preprocessing import OneHotEncoder, LabelEncoder

def f1(DF_data):
    # Encode the string labels as integers, then one-hot encode those integers
    Enc_ohe, Enc_label = OneHotEncoder(), LabelEncoder()
    DF_data["Dummies"] = Enc_label.fit_transform(DF_data["target"])
    DF_dummies2 = pd.DataFrame(Enc_ohe.fit_transform(DF_data[["Dummies"]]).toarray(),
                               columns=Enc_label.classes_)
    return DF_dummies2

%timeit pd.get_dummies(DF_data["target"])
#1000 loops, best of 3: 777 µs per loop

%timeit f1(DF_data)
#100 loops, best of 3: 2.91 ms per loop
O.rka

5 Answers


For machine learning, you almost definitely want to use sklearn.OneHotEncoder. For other tasks like simple analyses, you might be able to use pd.get_dummies, which is a bit more convenient.

Note that sklearn.OneHotEncoder was updated (in version 0.20) so that it does accept strings for categorical variables, as well as integers.

The crux of it is that the sklearn encoder creates a fitted transformer which persists and can then be applied to new data sets that use the same categorical variables, with consistent results.

from sklearn.preprocessing import OneHotEncoder

# Create the encoder.
encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(X_train)    # Assume for simplicity all features are categorical.

# Apply the encoder.
X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)

Note how we apply the same encoder we created via X_train to the new data set X_test.

Consider what happens if X_test contains different levels than X_train for one of its variables. For example, let's say X_train["color"] contains only "red" and "green", but in addition to those, X_test["color"] sometimes contains "blue".

If we use pd.get_dummies, X_test will end up with an additional "color_blue" column which X_train doesn't have, and the inconsistency will probably break our code later on, especially if we are feeding X_test to an sklearn model which we trained on X_train.

And if we want to process the data like this in production, where we're receiving a single example at a time, pd.get_dummies won't be of use.

With sklearn.OneHotEncoder on the other hand, once we've created the encoder, we can reuse it to produce the same output every time, with columns only for "red" and "green". And we can explicitly control what happens when it encounters the new level "blue": if we think that's impossible, then we can tell it to throw an error with handle_unknown="error"; otherwise we can tell it to continue and simply set the red and green columns to 0, with handle_unknown="ignore".
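To make that concrete, here is a minimal sketch of the red/green/blue scenario (the toy frames are made up for illustration; requires scikit-learn >= 0.20 for string input):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy data: "blue" appears only at prediction time.
X_train = pd.DataFrame({"color": ["red", "green", "red"]})
X_test = pd.DataFrame({"color": ["green", "blue"]})

encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(X_train)

# The column set is fixed by X_train; categories are sorted alphabetically.
print(encoder.categories_)                  # [array(['green', 'red'], dtype=object)]
print(encoder.transform(X_test).toarray())
# [[1. 0.]    <- "green"
#  [0. 0.]]   <- unseen "blue": both columns set to 0 instead of raising an error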

Denziloe
  • I believe this answer has far greater impact than the accepted one. The real magic is handling unknown categorical features, which are bound to pop up in production. – barker Jun 26 '19 at 20:24
  • I think this is a better, more complete answer than the accepted answer. – Chiraz BenAbdelkader Dec 23 '19 at 08:15
  • Yes. IMHO, this is a better answer than the accepted answer. – dami.max Mar 16 '20 at 15:51
  • Yup. This answer definitely explains better why OneHotEncoder might be better, along with a clear example. – Binod Mathews Sep 04 '20 at 07:03
  • Additional note: there are many other encoders in sklearn, and which to use depends on the data. https://stackoverflow.com/a/63822728/5114585 might help you understand some common encoders' uses. – Dr Nisha Arora Sep 21 '20 at 14:10
  • What about if the categorical variables one needs to transform contain missing values? – Ernesto Lopez Fune Jan 12 '21 at 16:21
  • @ErnestoLopezFune I think we can handle missing values in the categorical columns before the OneHotEncoder by imputation, most easily with a pipeline: Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]). It will replace them with the most frequent value. – code_conundrum Sep 07 '21 at 11:30

OneHotEncoder cannot process string values directly (this was true of scikit-learn before version 0.20). If your nominal features are strings, then you need to first map them into integers.

pandas.get_dummies is somewhat the opposite: by default, it only converts string columns into a one-hot representation, unless specific columns are passed.
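A quick sketch of that default behaviour (the toy frame is made up for illustration):

import pandas as pd

# By default only object/string (and categorical) columns are encoded.
df = pd.DataFrame({"color": ["red", "green", "red"],  # string -> encoded
                   "size": [1, 2, 3]})                # numeric -> left as-is

print(pd.get_dummies(df).columns.tolist())
# ['size', 'color_green', 'color_red']

# Numeric columns are only encoded when named explicitly via `columns=`.
print(pd.get_dummies(df, columns=["size", "color"]).columns.tolist())
# ['size_1', 'size_2', 'size_3', 'color_green', 'color_red']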

nos

I really like Carl's answer and upvoted it. I will just expand Carl's example a bit so that more people will hopefully appreciate that pd.get_dummies can handle unknowns. The two examples below show that pd.get_dummies can accomplish the same thing as OHE in handling unknowns.

# data is from @dzieciou's comment above
>>> data = pd.DataFrame(pd.Series(['good', 'bad', 'worst', 'good', 'good', 'bad']))
# new_data has two values that data does not have.
>>> new_data = pd.DataFrame(
...     pd.Series(['good', 'bad', 'worst', 'good', 'good', 'bad', 'excellent', 'perfect']))

Using pd.get_dummies

>>> df = pd.get_dummies(data)
>>> col_list = df.columns.tolist()
>>> print(df)
   0_bad  0_good  0_worst
0      0       1        0
1      1       0        0
2      0       0        1
3      0       1        0
4      0       1        0
5      1       0        0

>>> new_df = pd.get_dummies(new_data)
# handle the unknown categories by using .reindex() and .fillna()
>>> new_df = new_df.reindex(columns=col_list).fillna(0.00)
>>> print(new_df)
#    0_bad  0_good  0_worst
# 0      0       1        0
# 1      1       0        0
# 2      0       0        1
# 3      0       1        0
# 4      0       1        0
# 5      1       0        0
# 6      0       0        0
# 7      0       0        0

Using OneHotEncoder

>>> encoder = OneHotEncoder(handle_unknown="ignore", sparse=False)  # in scikit-learn >= 1.2, use sparse_output=False
>>> encoder.fit(data)
>>> encoder.transform(new_data)
# array([[0., 1., 0.],
#        [1., 0., 0.],
#        [0., 0., 1.],
#        [0., 1., 0.],
#        [0., 1., 0.],
#        [1., 0., 0.],
#        [0., 0., 0.],
#        [0., 0., 0.]])
Sarah
  • Can you please expand your answer to include an example with drop_first=True, and then also show new data that doesn't include the dropped value. – Mint Jan 02 '20 at 16:34

Why wouldn't you just cache or save the columns as a variable col_list from the resulting get_dummies, and then use pd.DataFrame.reindex to align the train and test datasets? For example:

df = pd.get_dummies(data)
col_list = df.columns.tolist()

new_df = pd.get_dummies(new_data)
new_df = new_df.reindex(columns=col_list).fillna(0.00) 
Carl
  • How does this answer the question? – gosuto Nov 30 '19 at 08:43
  • More to refute the previous comment that sklearn's OHE is superior because of handle_unknown; the same can be accomplished using pandas reindex. – Carl Dec 18 '19 at 17:49
  • There can be a sneaky problem with using get_dummies except as a one-off run. What happens if you have drop_first=True and the next sample doesn't include the dropped value? – Mint Jan 02 '20 at 16:32
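To see the drop_first pitfall from the last comment concretely, here is a minimal sketch (toy data, made up for illustration):

import pandas as pd

train = pd.Series(["blue", "green", "red"])
test = pd.Series(["green", "red"])  # "blue", the dropped level, is absent

train_df = pd.get_dummies(train, drop_first=True)  # drops "blue" -> columns: green, red
test_df = pd.get_dummies(test, drop_first=True)    # drops "green" -> column: red

# Reindexing cannot repair this: the "green" row becomes all zeros,
# which under the training encoding means "blue".
print(test_df.reindex(columns=train_df.columns.tolist()).fillna(0).astype(int))
#    green  red
# 0      0    0   <- really "green", silently encoded as "blue"
# 1      0    1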

This question was asked long ago, but is still relevant in 2023.

In one sentence: both can be used for the task; which one to choose depends on personal preference and other circumstances.

In a bit more detail:

  • For both OneHotEncoder and get_dummies, the most robust approach is to specify the categories explicitly. For OneHotEncoder this is done with the categories parameter, which takes a list of lists (one list per encoded column); for get_dummies you need to convert the relevant columns to the categorical dtype with the appropriate categories. Both approaches are sketched after this list.

  • OneHotEncoder assumes you want to encode all columns in your data, so if that is not the case you have to either manually select/transform/join-with-original-columns or wrap the OneHotEncoder in a ColumnTransformer. This is much easier with get_dummies.

  • If you like to stay in DataFrame space during your data-processing pipeline, then pandas.get_dummies is the most direct way; but if you rely on scikit-learn Pipelines, then OneHotEncoder wrapped in a ColumnTransformer is more straightforward.
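As a minimal sketch of the first two points (the frame, column names, and category lists below are made up for illustration):

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"color": ["red", "green"], "size": [1.0, 2.5]})

# OneHotEncoder: fix the category set explicitly (one list per encoded column),
# and use a ColumnTransformer so that only "color" is encoded.
ct = ColumnTransformer(
    [("onehot", OneHotEncoder(categories=[["red", "green", "blue"]]), ["color"])],
    remainder="passthrough",  # leave "size" untouched
)
print(ct.fit_transform(df))
# rows: red -> [1, 0, 0], green -> [0, 1, 0], followed by the passthrough "size"

# get_dummies equivalent: declare the categories on the column itself.
df["color"] = pd.Categorical(df["color"], categories=["red", "green", "blue"])
print(pd.get_dummies(df, columns=["color"]))
# produces color_red, color_green, color_blue even though "blue" never occurs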

For a full explanation with examples, read my article on Towards Data Science.

g.a