
Assume a pandas data frame whose rows contain grouped data that are sorted (all the rows for a given name appear next to each other). We would like to introduce a new calculated column whose value depends on the value column of each group: if the group's first value is zero, then all the rows of that group get the group's first non-zero value, or NaN if there is no such value. Otherwise, if the first value is non-zero, a fixed value is assigned, for example -1.

Example input data frame:

   name    value
0     a        0
1     a        0
2     a        6
3     a        8
4     b        0
5     b        0
6     c        5
7     c        7

Example output data frame with the calc column created:

   name    value    calc
0     a        0       6
1     a        0       6      
2     a        6       6
3     a        8       6
4     b        0     nan
5     b        0     nan
6     c        5      -1
7     c        7      -1

The approach I was thinking about was to create a lookup table of the first non-zero value of each group, so for the example above it would be:

      value
 a        6
 c        5

And then iterate over the input data frame, constructing the list of values according to the logic above, and assign it to the new column.
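The lookup-table idea described above can be sketched roughly like this (a sketch only; the variable names lookup and first_is_zero are mine, not from the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': list('aaaabbcc'),
                   'value': [0, 0, 6, 8, 0, 0, 5, 7]})

# lookup table: first non-zero value per name
# (groups containing only zeros simply drop out)
lookup = (df[df['value'] != 0]
          .drop_duplicates('name')
          .set_index('name')['value'])

# groups whose first value is non-zero get -1; the rest take the
# first non-zero value from the lookup (NaN if the group is all zeros)
first_is_zero = df.groupby('name')['value'].transform('first') == 0
df['calc'] = np.where(first_is_zero, df['name'].map(lookup), -1)
print(df)
```

This avoids the explicit row-by-row iteration, which tends to be slow in pandas.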

Krzysztof Słowiński

2 Answers


For better performance it is best not to use groupby; it is better to create a final dictionary and map it:

import numpy as np

#get all names that contain a 0 value
contains_zeros = df.loc[df['value'] == 0, 'name'].unique()
print (contains_zeros)
['a' 'b']

#get the first non-zero value, only for names that contain a 0
s = df[df['name'].isin(contains_zeros) & (df['value'] != 0)].drop_duplicates('name')
print (s)
  name  value
2    a      6

#dictionary of first non-zero values
d1 = s.set_index('name')['value'].to_dict()
print (d1)
{'a': 6}

#NaN for names where all values are 0
d2 = dict.fromkeys(set(contains_zeros) - set(s['name']), np.nan)
print (d2)
{'b': nan}

#-1 for names without any 0
d3 = dict.fromkeys(set(df['name'].unique()) - set(contains_zeros), -1)
print (d3)
{'c': -1}

#merge all together
#https://stackoverflow.com/q/38987
d =  {**d1, **d2, **d3}
print (d)
{'a': 6, 'b': nan, 'c': -1}

df['calc'] = df['name'].map(d)
print (df)
  name  value  calc
0    a      0   6.0
1    a      0   6.0
2    a      6   6.0
3    a      8   6.0
4    b      0   NaN
5    b      0   NaN
6    c      5  -1.0
7    c      7  -1.0
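Put together, the dictionary-building steps above can be wrapped into one reusable helper (a sketch; the function name build_calc_map is mine):

```python
import numpy as np
import pandas as pd

def build_calc_map(df):
    # names that contain at least one 0
    contains_zeros = df.loc[df['value'] == 0, 'name'].unique()
    # first non-zero value for those names
    s = (df[df['name'].isin(contains_zeros) & (df['value'] != 0)]
         .drop_duplicates('name'))
    d1 = s.set_index('name')['value'].to_dict()
    # names whose values are all 0 -> NaN
    d2 = dict.fromkeys(set(contains_zeros) - set(s['name']), np.nan)
    # names without any 0 -> -1
    d3 = dict.fromkeys(set(df['name'].unique()) - set(contains_zeros), -1)
    return {**d1, **d2, **d3}

df = pd.DataFrame({'name': list('aaaabbcc'),
                   'value': [0, 0, 6, 8, 0, 0, 5, 7]})
df['calc'] = df['name'].map(build_calc_map(df))
```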

Another, slower solution with groupby:

def f(x):
    if (x == 0).all():
        return np.nan
    elif (x == 0).any():
        return x[x != 0].iloc[0]
    else:
        return -1


df['calc'] = df.groupby('name')['value'].transform(f)
print (df)

  name  value  calc
0    a      0   6.0
1    a      0   6.0
2    a      6   6.0
3    a      8   6.0
4    b      0   NaN
5    b      0   NaN
6    c      5  -1.0
7    c      7  -1.0
jezrael

This is one way using groupby.transform. The alternatives in the function return_val directly reflect the 3 conditions you specify, and it is easily extensible to further criteria.

def return_val(x):
    vals = x.values
    if 0 not in vals:
        return -1
    else:
        return next((i for i in vals if i != 0), np.nan)

df['calc'] = df.groupby('name')['value'].transform(return_val)

print(df)

  name  value  calc
0    a      0   6.0
1    a      0   6.0
2    a      6   6.0
3    a      8   6.0
4    b      0   NaN
5    b      0   NaN
6    c      5  -1.0
7    c      7  -1.0
jpp