
Is there any way of summing columns after grouping in a pandas DataFrame? For example, I have the following DataFrame:

ID   W_1       W_2     W_3 
1    0.1       0.2     0.3
1    0.2       0.4     0.5
2    0.3       0.3     0.2
2    0.1       0.3     0.4
2    0.2       0.0     0.5
1    0.5       0.3     0.2
1    0.4       0.2     0.1

I want an extra column called "my_sum" that holds, for each row, the sum across all the W columns (W_1, W_2, W_3). The output would look like this:

ID   W_1       W_2     W_3     my_sum
1    0.1       0.2     0.3      0.6
1    0.2       0.4     0.5      1.1
2    0.3       0.3     0.2      0.8
2    0.1       0.3     0.4      0.8
2    0.2       0.0     0.5      0.7
1    0.5       0.3     0.2      1.0
1    0.4       0.2     0.1      0.7

I tried the following:

df['my_sum'] =   df.groupby('ID')['W_1','W_1','W_1'].transform(sum,axis=1)

but this sums all entries of just W_1. The documentation mentions the axis parameter, but I am not sure why it is not effective here.

I looked into this question and also this, but they are different from what I want.
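For anyone who wants to reproduce this, the frame above can be built as follows (values copied from the table in the question):

```python
import pandas as pd

# Reproducible setup for the DataFrame shown above
df = pd.DataFrame({
    'ID':  [1, 1, 2, 2, 2, 1, 1],
    'W_1': [0.1, 0.2, 0.3, 0.1, 0.2, 0.5, 0.4],
    'W_2': [0.2, 0.4, 0.3, 0.3, 0.0, 0.3, 0.2],
    'W_3': [0.3, 0.5, 0.2, 0.4, 0.5, 0.2, 0.1],
})
```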

cs95
owise

4 Answers


The thing that remains the same across all of these is the .sum(1). Here are some inventive alternatives to the answers already posted.


df.select_dtypes

df['my_sum'] = df.select_dtypes(float).sum(1)
df
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

df.iloc

df['my_sum'] = df.iloc[:, 1:].sum(1)
df
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

Boolean Indexing

This is obnoxiously exploiting your data: every W value is below 1 while every ID is at least 1, so df[df &lt; 1] masks the ID column out as NaN and the row-wise sum skips it.

df['my_sum'] = df[df < 1].sum(1)
df
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

DataFrame.sum, or numpy.sum

Filter on the column names using str.contains:

df['my_sum'] = df.loc[:, df.columns.str.contains('W_')].sum(1)
df
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

Alternatively try summing on a numpy array directly, for performance:

df['my_sum'] = df.values[:, 1:].sum(1)
df
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7
cs95
4
In [7]: df['my_sum'] = df.drop('ID', axis=1).sum(axis=1)

In [8]: df
Out[8]:
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

or:

In [9]: df['my_sum'] = df.filter(regex=r'^W_\d+').sum(axis=1)

In [10]: df
Out[10]:
   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7
MaxU - stand with Ukraine
4

You don't need to group by anything if you just want to sum across rows. Just use axis=1 in your sum.

The crux is figuring out which columns to sum over. We don't actually know what your "real" data looks like, so the appropriate columns can be picked in many ways.


@MaxU covered the more practical solutions. This one should be fast.

df.assign(
    my_sum=np.column_stack([df[c].values for c in df if c.startswith('W_')]).sum(1)
)

   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7

Or, if it really is just ['W_1', 'W_2', 'W_3']:

df.assign(my_sum=df[['W_1', 'W_2', 'W_3']].sum(1))

   ID  W_1  W_2  W_3  my_sum
0   1  0.1  0.2  0.3     0.6
1   1  0.2  0.4  0.5     1.1
2   2  0.3  0.3  0.2     0.8
3   2  0.1  0.3  0.4     0.8
4   2  0.2  0.0  0.5     0.7
5   1  0.5  0.3  0.2     1.0
6   1  0.4  0.2  0.1     0.7
piRSquared
1

Also, you can pass a list of column names to the DataFrame, indicating which columns to sum over. This is helpful because the columns can readily be put in list form.

sum_list = ['W_1', 'W_2', 'W_3']
df['my_sum'] = df[sum_list].sum(1)
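If the real data has many W columns, the list itself can be built programmatically rather than typed out. A small sketch, assuming the columns all share the W_ prefix:

```python
import pandas as pd

df = pd.DataFrame({'ID':  [1, 1, 2],
                   'W_1': [0.1, 0.2, 0.3],
                   'W_2': [0.2, 0.4, 0.3],
                   'W_3': [0.3, 0.5, 0.2]})

# Build the column list from the prefix instead of hard-coding it
sum_list = [c for c in df.columns if c.startswith('W_')]
df['my_sum'] = df[sum_list].sum(axis=1)
```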
sameagol
  • This is not a real answer. While this may, or may not, solve the issue described in the OP, you need to add more context and explanation on what caused the original issue and how your solution fixes it. – GMB Dec 10 '18 at 23:36