
Let's say that I have some data generated as follows:

import numpy as np

N = 20
m = 3
data = np.random.normal(size=(N,m)) + np.random.normal(size=(N,m))**3

and then I create some categorization variable:

indx = np.random.randint(0,3,size=N).astype(np.int32)

and generate a DataFrame:

import pandas as pd
df = pd.DataFrame(np.hstack((data, indx[:,None])), 
             columns=['a%s' % k for k in range(m)] + [ 'indx'])

I can get the mean value, per group as:

df.groupby('indx').mean()

What I'm unsure how to do is then subtract the mean of each group, per column, from the original data, so that the data in each column is normalized by the mean within its group. Any suggestions would be appreciated.

JoshAdel

4 Answers

In [10]: df.groupby('indx').transform(lambda x: (x - x.mean()) / x.std())

should do it.
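
Note that this also divides by the group standard deviation, giving a within-group z-score. If you only want to subtract the group mean, as the question literally asks, dropping the / x.std() term should work; a minimal sketch along the same lines:

df.groupby('indx').transform(lambda x: x - x.mean())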

TomAugspurger

If the data contains many groups (thousands or more), the accepted answer using a lambda may take a very long time to compute. A fast solution would be:

groups = df.groupby("indx")
mean, std = groups.transform("mean"), groups.transform("std")
normalized = (df[mean.columns] - mean) / std
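
Note that normalized contains only the data columns; the grouping column 'indx' is excluded from the transform output. If you want the group labels back alongside the normalized values, one possible sketch (my addition, relying on the index still being aligned) is:

normalized['indx'] = df['indx']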

Explanation and benchmarking

The accepted answer suffers from a performance problem because transform is called with a Python lambda. Even though groupby.transform itself is fast, as are the already vectorized calls in the lambda function (.mean(), .std() and the subtraction), the call to the pure Python lambda function for each group creates considerable overhead.

This can be avoided by using pure vectorized Pandas/Numpy calls and not writing any Python method, as shown in ErnestScribbler's answer.

We can get around the headache of merging and naming the columns by leveraging the broadcasting abilities of .transform. Let's put the solution from above into a method for benchmarking:

def normalize_by_group(df, by):
    groups = df.groupby(by)
    # computes group-wise mean/std,
    # then auto broadcasts to size of group chunk
    mean = groups.transform("mean")
    std = groups.transform("std")
    normalized = (df[mean.columns] - mean) / std
    return normalized

I changed the data generation from the original question to allow for more groups:

def gen_data(N, num_groups):
    m = 3
    data = np.random.normal(size=(N,m)) + np.random.normal(size=(N,m))**3
    indx = np.random.randint(0,num_groups,size=N).astype(np.int32)

    df = pd.DataFrame(np.hstack((data, indx[:,None])), 
                      columns=['a%s' % k for k in range(m)] + [ 'indx'])
    return df

With only two groups (and thus only two Python function calls), the lambda version is about 1.8x slower than the numpy code:

In: df2g = gen_data(10000, 2)  # 3 cols, 10000 rows, 2 groups

In: %timeit normalize_by_group(df2g, "indx")
6.61 ms ± 72.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In: %timeit df2g.groupby('indx').transform(lambda x: (x - x.mean()) / x.std())
12.3 ms ± 130 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

When the number of groups is increased to 1000, the runtime issue becomes apparent. The lambda version is now about 370x slower than the numpy code:

In: df1000g = gen_data(10000, 1000)  # 3 cols, 10000 rows, 1000 groups

In: %timeit normalize_by_group(df1000g, "indx")
7.5 ms ± 87.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In: %timeit df1000g.groupby('indx').transform(lambda x: (x - x.mean()) / x.std())
2.78 s ± 13.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
w-m
  • are there any circumstances where this translates into a delay that would really matter? I'm not too familiar with these operations so I'm actually curious if there is a scenario where this would be a tens of seconds difference between operations solely caused by the extra function calls. 13 ms is hardly noticeable in real operation and a price I would pay every time for the code cleanliness, I suppose if you are daisy chaining these operations together and doing this repeatedly it could add up, but would that be extreme or normal? – Vince W. Jun 25 '18 at 17:49
  • Yes, there are circumstances where it's noticeable! It depends on the data of course. My dataset contains roughly 1.6 million groups. The lambda approach would run for over an hour. That's why I wrote this, it finishes in a couple of seconds even for the 1.6m groups. I completely agree with you - if you only have a handful of groups, go for the clean & simple version, a few milliseconds won't matter. – w-m Jun 25 '18 at 21:16
  • fascinating, thanks for the reply. I can see why 1.6 million groups would be an impetus to look for a faster way. – Vince W. Jun 25 '18 at 21:18

The accepted answer works and is elegant. Unfortunately, for large datasets I think that using .transform() with a lambda is, performance-wise, much slower than the following less elegant approach (illustrated with a single column 'a0'):

means_stds = df.groupby('indx')['a0'].agg(['mean','std']).reset_index()
df = df.merge(means_stds,on='indx')
df['a0_normalized'] = (df['a0'] - df['mean']) / df['std']

To do it for multiple columns you'll have to figure out the merge. My suggestion would be to flatten the multiindex columns from aggregation as in this answer and then merge and normalize for each column separately:

means_stds = df.groupby('indx')[['a0','a1']].agg(['mean','std']).reset_index()
means_stds.columns = ['%s%s' % (a, '|%s' % b if b else '') for a, b in means_stds.columns]
df = df.merge(means_stds,on='indx')
for col in ['a0','a1']:
    df[col+'_normalized'] = ( df[col] - df[col+'|mean'] ) / df[col+'|std']
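
If the per-group helper columns are no longer needed after normalization, they can be dropped again; a small sketch, assuming the '|mean'/'|std' naming scheme from above:

df = df.drop(columns=[c for c in df.columns if c.endswith('|mean') or c.endswith('|std')])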
ErnestScribbler

Although this is not the prettiest solution, you could do something like this:

indx = df['indx'].copy()   # save the group labels so they don't get demeaned as well
for indices in df.groupby('indx').groups.values():
    df.loc[indices] -= df.loc[indices].mean()
df['indx'] = indx          # restore the original group labels
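
A variant of the same idea that avoids saving and restoring 'indx' is to restrict the in-place update to the data columns; a sketch, assuming the column names from the question:

cols = ['a0', 'a1', 'a2']
for indices in df.groupby('indx').groups.values():
    df.loc[indices, cols] -= df.loc[indices, cols].mean()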
Mike