
I have a dataframe with 2.7 million rows, as shown below:

df
Out[10]: 
         ClaimId  ServiceSubCodeKey  ClaimRowNumber  SscRowNumber
0        1902659                183               1             1
1        1902659               2088               1             2
2        1902663               3274               2             1
3        1902674                 12               3             1
4        1902674                 23               3             2
         ...                ...             ...           ...
2793010  2563847               3109          603037             4
2793011  2563883               3109          603038             1
2793012  2564007               3626          603039             1
2793013  2564007               3628          603039             2
2793014  2564363               3109          603040             1

[2793015 rows x 4 columns]

I am trying to one-hot encode this in Python as shown below, but I end up with a MemoryError:

import pandas as pd

columns = (
    pd.get_dummies(df["ServiceSubCodeKey"])
    .reindex(range(df.ServiceSubCodeKey.min(),
        df.ServiceSubCodeKey.max()+1), axis=1, fill_value=0)
    # now it has all digits
    .astype(str)
    )
# this will create codes
codes_values = [int(''.join(r)) for r in columns.itertuples(index=False)]
codes = pd.Series({'test': codes_values}).explode()
codes.index = df.index

# groupby and aggregate the values into lists
dfg = codes.groupby(df.ClaimId).agg(list).reset_index()

# sum the lists; doing this with a pandas function also does not work, so no .sum or .apply
summed_lists = list()
for r, v in dfg.iterrows():
    summed_lists.append(str(sum(v[0])))

# assign the list of strings to a column
dfg['sums'] = summed_lists

# perform the remainder of the functions on the sums column
dfg['final'] = dfg.sums.str.pad(width=columns.shape[1], fillchar='0').str.rstrip('0')

# merge df and dfg.final
dfm = pd.merge(df, dfg[['ClaimId', 'final']], on='ClaimId')
dfm
  File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.astype_str

MemoryError

How can I do this in automated batches so it doesn't give me a memory error?

1 Answer

onehot = []
for groupi, group in df.groupby(df.index // 100_000):
    # encode each group separately; group_onehot stands for the
    # per-group encoding result (however you choose to compute it)
    onehot.extend(group_onehot)
df = df.assign(onehot=onehot)

This will give you 28 groups of 100,000 rows to work on individually.
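
A fuller sketch of that loop, filling in the per-group encoding with pd.get_dummies() on each chunk (this is one assumption about how each group might be encoded; reindexing to a fixed column range keeps every chunk's strings the same width):

import pandas as pd

cols = range(df.ServiceSubCodeKey.min(), df.ServiceSubCodeKey.max() + 1)
onehot = []
for _, group in df.groupby(df.index // 100_000):
    # one-hot encode just this chunk, aligned to the full code range
    dummies = (pd.get_dummies(group['ServiceSubCodeKey'])
               .reindex(cols, axis=1, fill_value=0)
               .astype('uint8'))
    onehot.extend(''.join(map(str, row))
                  for row in dummies.itertuples(index=False))
df = df.assign(onehot=onehot)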

However, looking at your code, the line:

codes_values = [int(''.join(r)) for r in columns.itertuples(index=False)]

is creating a string of up to ~4,000 digits and converting it to an integer on the order of 10^4000, which will overflow any fixed-width integer type (see https://numpy.org/devdocs/user/basics.types.html).
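
A quick way to see the limit (pure-Python ints are arbitrary precision, but NumPy's fixed-width types are not):

import numpy as np

big = int('1' * 4000)  # fine: Python ints have arbitrary precision
np.int64(big)          # OverflowError: Python int too large to convert to C long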

Edit

An alternative encoding method. Starting with this df:

df = pd.DataFrame({
    'ClaimId': [1902659, 1902659, 1902663, 1902674, 1902674, 2563847, 2563883,
        2564007, 2564007, 2564363],
    'ServiceSubCodeKey': [183, 2088, 3274, 12, 23, 3109, 3109, 3626, 3628, 3109]
    })

The code:

scale = df.ServiceSubCodeKey.max() + 1
onehot = []
for claimid, ssc in df.groupby('ClaimId').ServiceSubCodeKey:
    ssc_set = set(ssc)  # set membership is O(1), vs O(n) for a list
    onehot.append([claimid,
        ''.join(['1' if i in ssc_set else '0' for i in range(1, scale)])])
onehot = pd.DataFrame(onehot, columns=['ClaimId', 'onehot'])
print(onehot)

Output

   ClaimId                                             onehot
0  1902659  0000000000000000000000000000000000000000000000...
1  1902663  0000000000000000000000000000000000000000000000...
2  1902674  0000000000010000000000100000000000000000000000...
3  2563847  0000000000000000000000000000000000000000000000...
4  2563883  0000000000000000000000000000000000000000000000...
5  2564007  0000000000000000000000000000000000000000000000...
6  2564363  0000000000000000000000000000000000000000000000...

This fixes the overflow issue in your method and avoids calling pd.get_dummies() to create a 600K x 4K dummy dataframe. The trade-off is that it iterates over a grouped series and builds a list comprehension for every group, neither of which takes advantage of pandas' built-in C implementations.
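
If that loop becomes a bottleneck, a partly vectorized sketch could look like the following (assumptions: ServiceSubCodeKey values are positive integers, and encode_group is a hypothetical helper, not part of the original answer):

import numpy as np
import pandas as pd

def encode_group(ssc, scale):
    # build one 0/1 row with numpy instead of a Python comprehension
    row = np.zeros(scale - 1, dtype='uint8')  # positions 1 .. scale-1
    row[ssc.to_numpy() - 1] = 1               # mark the codes present
    return ''.join(map(str, row))

scale = df.ServiceSubCodeKey.max() + 1
onehot = (df.groupby('ClaimId').ServiceSubCodeKey
            .apply(encode_group, scale=scale)
            .rename('onehot')
            .reset_index())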

From here you can:

  • Recommended: keep a summary of one-hot encodings per ClaimId, or
  • What you asked for: merge it back into df, duplicating the same encoding as many times as each ClaimId appears in df

with

df = df.merge(onehot, on='ClaimId')

Output

   ClaimId  ServiceSubCodeKey                                             onehot
0  1902659                183  0000000000000000000000000000000000000000000000...
1  1902659               2088  0000000000000000000000000000000000000000000000...
2  1902663               3274  0000000000000000000000000000000000000000000000...
3  1902674                 12  0000000000010000000000100000000000000000000000...
4  1902674                 23  0000000000010000000000100000000000000000000000...
5  2563847               3109  0000000000000000000000000000000000000000000000...
6  2563883               3109  0000000000000000000000000000000000000000000000...
7  2564007               3626  0000000000000000000000000000000000000000000000...
8  2564007               3628  0000000000000000000000000000000000000000000000...
9  2564363               3109  0000000000000000000000000000000000000000000000...