
My data has the following shape:

id   column1   column2
a    x         1
a    x         3
a    y         3
b    y         1
b    y         2

And I want to get the most repeated value for each id, as well as its frequency as a percentage.

id   column1   %      column2   %
a    x         66.6   3         66.6
b    y         100.0  N/A       N/A

A special case is when there are equal frequencies; in that case I output N/A for both the value and the percentage.

Right now my solution uses plain Python dictionaries and lists. However, I am struggling to approach this from a DataFrame point of view.
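
For reference, the sample data above can be reproduced with (a minimal setup; dtypes assumed from the table):

import pandas as pd

df = pd.DataFrame({'id':      ['a', 'a', 'a', 'b', 'b'],
                   'column1': ['x', 'x', 'y', 'y', 'y'],
                   'column2': [1, 3, 3, 1, 2]})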

ooo
    Possible duplicate of [count the frequency that a value occurs in a dataframe column](https://stackoverflow.com/questions/22391433/count-the-frequency-that-a-value-occurs-in-a-dataframe-column) – marcogemaque Feb 11 '19 at 20:11

4 Answers


I can only think of a for loop, then concat:

g = df.groupby('id')
pd.concat([g[x].value_counts(normalize=True)   # per-id frequency ratios, sorted descending
             .groupby(level=0).head(1)         # keep the most frequent value per id
             .to_frame('%')
             .reset_index(level=1)
           for x in df.columns[1:]], axis=1)
Out[135]: 
   column1         %  column2         %
id                                     
a        x  0.666667        3  0.666667
b        y  1.000000        1  0.500000
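
To get the 66.6-style percentages from the question rather than raw ratios, the normalized counts can be scaled and rounded, e.g. (a small variation on the above, not part of the original answer; note that rounding gives 66.7 where the question shows a truncated 66.6):

g = df.groupby('id')
pd.concat([g[x].value_counts(normalize=True)
             .groupby(level=0).head(1)
             .mul(100).round(1)                # e.g. 0.666667 -> 66.7
             .to_frame('%')
             .reset_index(level=1)
           for x in df.columns[1:]], axis=1)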
BENY

A (very) similar solution to @Wen's, but one that accounts for the condition where the ratios within a group are tied, in which case the result should be NaN:

u = df.groupby('id')
c = ('column1', 'column2')

def helper(group, col):
    return (group[col].value_counts(normalize=True, sort=True)
            .drop_duplicates(keep=False)   # tied ratios occur more than once, so all of them drop
            .groupby(level=0).head(1)      # most frequent value per id
            .to_frame(f'{col}_%')
            .reset_index(level=1))

pd.concat([helper(u, col) for col in c], axis=1)

  column1  column1_%  column2  column2_%
a       x   0.666667      3.0   0.666667
b       y   1.000000      NaN        NaN
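
To see why the tie comes back as NaN: for id b, both values of column2 occur with the same ratio, so drop_duplicates(keep=False) removes both entries (a quick illustration on the same df):

vc = df[df['id'] == 'b']['column2'].value_counts(normalize=True)
# 1    0.5
# 2    0.5
vc.drop_duplicates(keep=False)   # empty Series: both tied ratios are dropped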
user3483203
  • Thank you for your reply. I would add `dropna=False` to value_counts() and voila! – ooo Feb 11 '19 at 21:28
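
Following up on that comment: with dropna=False, value_counts also counts missing values as a category, so ids whose most common entry is NaN are still reported (a sketch of the adjusted helper, not part of the original answer):

def helper(group, col):
    return (group[col].value_counts(normalize=True, sort=True, dropna=False)  # count NaNs too
            .drop_duplicates(keep=False)
            .groupby(level=0).head(1)
            .to_frame(f'{col}_%')
            .reset_index(level=1))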

For what it's worth...
This feels more natural to me:

import pandas as pd
from collections import Counter

# count each (id, column, value) triple
s = pd.Series(
    Counter([
        (i, c, v) for (i, c), v in df.set_index('id').stack().items()
    ])
)

# per (id, column): the most frequent value and its frequency ratio
d = s.groupby(level=[0, 1]).pipe(lambda g: [*zip(g.idxmax(), g.max() / g.sum())])

a = {}

for (i, col, var), val in d:
    a[(i, col, 'var')] = var   # the most frequent value
    a[(i, col, 'val')] = val   # its frequency ratio

pd.Series(a).unstack([1, 2])

    column1       column2    
        val var       val var
a  0.666667   x  0.666667   3
b         1   y       0.5   1
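
One caveat: Series.idxmax returns the first of several tied maxima rather than signalling a tie, which is why id b / column2 comes back as 1 with ratio 0.5 instead of the N/A the question asks for. A tiny illustration of that behaviour:

import pandas as pd

pd.Series({('b', 'column2', 1): 1, ('b', 'column2', 2): 1}).idxmax()
# ('b', 'column2', 1) -- the first tied entry wins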
piRSquared

Using apply:

import pandas as pd
from collections import Counter

df = pd.DataFrame({'id': ['a', 'a', 'a', 'b', 'b'],
                   'column1': ['x', 'x', 'y', 'y', 'y'],
                   'column2': [1, 3, 3, 1, 2]})

def get_max(row):
    tem_dict = Counter(row)
    value, count = tem_dict.most_common()[0]              # most frequent value and its count
    return value, float(count) / sum(tem_dict.values())   # value plus its frequency ratio

pd.concat([pd.DataFrame(df.groupby('id')['column1'].apply(get_max).tolist(), columns=['Column1', '%']),
           pd.DataFrame(df.groupby('id')['column2'].apply(get_max).tolist(), columns=['Column2', '%'])],
          axis=1)
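
Note that .tolist() discards the group labels, so the result carries a plain 0..n index; one way to restore the ids (an addition for illustration, not part of the original answer) is to reuse the sorted group keys:

res = pd.concat([pd.DataFrame(df.groupby('id')[col].apply(get_max).tolist(),
                              columns=[col.capitalize(), '%'])
                 for col in ('column1', 'column2')], axis=1)
res.index = df.groupby('id').size().index   # the sorted group keys, named 'id'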
mad_