
I'm normalizing some data in Pandas and the iterrows loop takes a long time. The math seems relatively easy, though, and there are only ~2500 rows. Is there a faster way to do this?

As you can see below, I've done the normalizing manually.

# normalize the rating columns to values between 0 and 1
df_1['numerator_norm'] = ((df_1['rating_numerator']- df_1['rating_numerator'].min())/(df_1['rating_numerator'].max()- df_1['rating_numerator'].min()))
df_1['denominator_norm'] = ((df_1['rating_denominator']- df_1['rating_denominator'].min())/(df_1['rating_denominator'].max()- df_1['rating_denominator'].min()))
df_1['normalized_rating'] = np.nan

for index, row in df_1.iterrows():
    df_1['normalized_rating'][index] = (df_1['numerator_norm'][index] / df_1['denominator_norm'][index])

It would be nice for this process to take only a few seconds instead of ~60 seconds.

Chris Macaluso

1 Answer


Change:

for index, row in df_1.iterrows():
    df_1['normalized_rating'][index] = (df_1['numerator_norm'][index] / df_1['denominator_norm'][index])

to:

df_1['normalized_rating'] = df_1['numerator_norm'] / df_1['denominator_norm']

for vectorized division.
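As a minimal sketch of the whole pipeline with the single-line replacement in place, assuming a small synthetic frame standing in for df_1 (the column names are taken from the question). Note that min-max scaling always maps the smallest denominator to 0, so the division produces NaN/inf on those rows rather than raising; pandas handles this silently:

import numpy as np
import pandas as pd

# Hypothetical data standing in for df_1 (assumption, not the asker's data).
df_1 = pd.DataFrame({
    "rating_numerator": [5, 10, 15, 20],
    "rating_denominator": [10, 10, 20, 20],
})

# Min-max normalize each rating column to [0, 1], exactly as in the question.
num = df_1["rating_numerator"]
den = df_1["rating_denominator"]
df_1["numerator_norm"] = (num - num.min()) / (num.max() - num.min())
df_1["denominator_norm"] = (den - den.min()) / (den.max() - den.min())

# One vectorized division over the whole column replaces the iterrows loop.
df_1["normalized_rating"] = df_1["numerator_norm"] / df_1["denominator_norm"]

This computes the same values as the loop, but as a single column-wise operation in NumPy rather than ~2500 Python-level iterations.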

iterrows is best avoided; see Does pandas iterrows have performance issues?

jezrael