
I have 185 million samples, each about 3.8 MB. To prepare my dataset, I need to one-hot encode many of the features, after which I end up with over 15,000 features.

But I need to prepare the dataset in batches, since the memory footprint exceeds 100 GB for the features alone when one-hot encoding only 3 million samples.

The question is how to preserve the encodings/mappings/labels between batches. The batches will not necessarily contain all the levels of a category. That is, batch #1 may have Paris, Tokyo, and Rome, while batch #2 may have Paris and London. But in the end I need Paris, Tokyo, Rome, and London all mapped to one consistent encoding.

Assuming that I cannot determine the levels of my Cities column across all 185 million samples at once, since they won't fit in RAM, what should I do? If I apply the same LabelEncoder instance to different batches, will the mappings remain the same? Afterwards I will also need to one-hot encode in batches, either with scikit-learn or Keras' np_utils.to_categorical. So, same question: how do I use those three methods in batches, or apply them at once to a file format stored on disk?
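
To make the concern concrete, here is a minimal sketch with toy batches, showing that re-fitting a LabelEncoder rebuilds the mapping from scratch:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(['Paris', 'Tokyo', 'Rome'])   # batch #1
print(le.transform(['Paris']))       # [0] -- classes_ are sorted: Paris=0, Rome=1, Tokyo=2

le.fit(['Paris', 'London'])          # re-fitting on batch #2 rebuilds the mapping
print(le.transform(['Paris']))       # [1] -- now London=0, Paris=1

# And le.transform(['London']) after the first fit would raise a ValueError
# ("y contains previously unseen labels").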

user798719

1 Answer


I suggest using pandas' get_dummies() for this, since sklearn's OneHotEncoder() needs to see all possible categorical values at fit() time; otherwise it throws an error when it encounters a new one during transform().
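
To illustrate that failure mode, here is a minimal sketch (assuming a scikit-learn version recent enough for OneHotEncoder to accept string input; in older versions you would hit the same wall via LabelEncoder first):

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder()                         # handle_unknown='error' is the default
enc.fit([['Paris'], ['Tokyo'], ['Rome']])     # fitted on batch #1 only
enc.transform([['London']])                   # raises ValueError: unknown category

With get_dummies(), by contrast, you can encode each batch independently and reconcile the columns afterwards: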

import pandas as pd

# Create a toy dataset and split it into batches
data_column = pd.Series(['Paris', 'Tokyo', 'Rome', 'London', 'Chicago', 'Paris'])
batch_1 = data_column[:3]
batch_2 = data_column[3:]

# Convert the categorical feature column to a matrix of dummy variables
batch_1_encoded = pd.get_dummies(batch_1, prefix='City')
batch_2_encoded = pd.get_dummies(batch_2, prefix='City')

# Row-bind (append) the encoded batches back together; concat aligns on
# column names, leaving NaN where a batch lacked a given city
final_encoded = pd.concat([batch_1_encoded, batch_2_encoded], axis=0)

# Final wrap-up: replace NaNs with 0 and convert the flags from float to int
final_encoded = final_encoded.fillna(0).astype(int)

final_encoded

Output:

   City_Chicago  City_London  City_Paris  City_Rome  City_Tokyo
0             0            0           1          0           0
1             0            0           0          0           1
2             0            0           0          1           0
3             0            1           0          0           0
4             1            0           0          0           0
5             0            0           1          0           0
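
One caveat: the concat step above still needs all encoded batches in memory at once. If that is a problem at your scale, a variant (a sketch, assuming you can collect the full list of city levels in a cheap first pass that reads only that column in chunks; all_cities and encode_batch here are hypothetical names) is to fix the categories up front, so every batch yields identical columns and can be written to disk independently:

import pandas as pd

# Hypothetical full category list, collected in a first pass over the file
all_cities = ['Chicago', 'London', 'Paris', 'Rome', 'Tokyo']

def encode_batch(batch):
    # A Categorical with a fixed set of levels guarantees one dummy column
    # per level, in the same order, even for levels absent from this batch
    return pd.get_dummies(pd.Categorical(batch, categories=all_cities), prefix='City')

encode_batch(['Paris', 'Tokyo']).columns.tolist()
# ['City_Chicago', 'City_London', 'City_Paris', 'City_Rome', 'City_Tokyo']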
Max Power
  • get_dummies is the exact same thing as one-hot encoding? I've read conflicting answers. I think they are the same thing, but just want to make sure. – user798719 May 15 '17 at 06:06
  • Yeah, one-hot encoding is converting a categorical variable to a sparse matrix of dummy variables. Scikit-learn does this the scikit-learn way, with a transformer that implements fit() and transform(); pandas has `get_dummies()`. – Max Power May 15 '17 at 12:00