
Consider this DataFrame with many columns, among them a group key in the column 'feature' and some values in the column 'values'.

I want an extra column containing the relative values per feature (group). The desired result is precalculated manually in the column 'desired'.

df = pd.DataFrame(
    data={
        'feature': [1, 1, 2, 3, 3, 3],
        'values': [30.0, 20.0, 25.0, 100.0, 250.0, 50.0],
        'desired': [0.6, 0.4, 1.0, 0.25, 0.625, 0.125],
        'more_columns': range(6),
    },
)

Which leads to the DataFrame

   feature  values  desired  more_columns
0        1    30.0    0.600             0
1        1    20.0    0.400             1
2        2    25.0    1.000             2
3        3   100.0    0.250             3
4        3   250.0    0.625             4
5        3    50.0    0.125             5

So for the group defined by feature 1 the desired values are 0.6 and 0.4 (because 0.6 = 30 / (20+30)) and so on.

I came to these values manually using

for feature, group in df.groupby('feature'):
    rel_values = (group['values'] / group['values'].sum()).values
    df[df['feature'] == feature]['result'] = rel_values  # no effect
    print(f'{feature}: {rel_values}')

# which prints:
1: [0.6 0.4]
2: [1.]
3: [0.25  0.625 0.125]

# but df remains unchanged
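The reason df remains unchanged is that `df[df['feature'] == feature]['result'] = ...` is a chained assignment: the boolean mask produces a temporary copy, and the new column is written to that copy, not to df. A minimal sketch of the loop-based fix, writing through `.loc` instead (a single `transform` call, as in the answers below, is still preferable):

```python
import pandas as pd

df = pd.DataFrame({
    'feature': [1, 1, 2, 3, 3, 3],
    'values': [30.0, 20.0, 25.0, 100.0, 250.0, 50.0],
})

for feature, group in df.groupby('feature'):
    # .loc with a boolean mask and a column label assigns into df itself,
    # not into a temporary copy, so the new column actually sticks
    df.loc[df['feature'] == feature, 'result'] = (
        group['values'] / group['values'].sum()
    ).values
```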

I believe that there must be a smart and fast way in pandas to accomplish this.

Nras

2 Answers


Use GroupBy.transform to get a Series of per-group sums with the same size as the original df, so you can divide by it with div:

df['new'] = df['values'].div(df.groupby('feature')['values'].transform('sum'))
print (df)
   feature  values  desired  more_columns    new
0        1    30.0    0.600             0  0.600
1        1    20.0    0.400             1  0.400
2        2    25.0    1.000             2  1.000
3        3   100.0    0.250             3  0.250
4        3   250.0    0.625             4  0.625
5        3    50.0    0.125             5  0.125

Detail:

print (df.groupby('feature')['values'].transform('sum'))
0     50.0
1     50.0
2     25.0
3    400.0
4    400.0
5    400.0
Name: values, dtype: float64

Performance:

On real data, performance depends on the number of groups and the length of the DataFrame.

np.random.seed(123)
N = 1000000
L = np.random.randint(1000,size=N)
df = pd.DataFrame({'feature': np.random.choice(L, N),
                   'values':np.random.rand(N)})
#print (df)

In [272]: %timeit df['new'] = df['values'].div(df.groupby('feature')['values'].transform('sum'))
80.7 ms ± 2.78 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [273]: %timeit df['desired'] = df.groupby('feature').apply(lambda g: g['values'] / g['values'].sum()).values
1.17 s ± 23.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [274]: %timeit df['desired'] = df.groupby('feature')['values'].transform(lambda x: x / x.sum())
727 ms ± 14.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
jezrael

Method 1: Using transform

df['desired'] = df.groupby('feature')['values'].transform(lambda x: x / x.sum())

Method 2: Using apply

df['desired'] = df.groupby('feature').apply(lambda g: g['values'] / g['values'].sum()).values

Output:

    feature  values  desired  more_columns
0        1    30.0    0.600             0
1        1    20.0    0.400             1
2        2    25.0    1.000             2
3        3   100.0    0.250             3
4        3   250.0    0.625             4
5        3    50.0    0.125             5
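One caveat on Method 2 (my note, not from the answer): `.values` strips the index and assigns positionally, and groupby emits groups in sorted key order, so it only lines up here because the rows happen to be sorted by 'feature' already. `transform` returns a Series aligned to df's index, so it stays correct even with shuffled rows, as this sketch shows:

```python
import pandas as pd

# same data as above, but with the groups interleaved
df = pd.DataFrame({
    'feature': [3, 1, 2, 1, 3, 3],
    'values': [100.0, 30.0, 25.0, 20.0, 250.0, 50.0],
})

# transform's result is index-aligned, so each row is divided by
# the sum of its own group regardless of row order
df['desired'] = df.groupby('feature')['values'].transform(lambda x: x / x.sum())
```

With the apply-plus-`.values` variant on this shuffled frame, the concatenated group results would be written back in the wrong row order.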
harvpan