
I have a DataFrame:

df = pd.DataFrame({
    'keywords': [['a', 'b', 'c'], ['c', 'd'], ['a', 'b', 'c', 'd'], ['b', 'c', 'g', 'h', 'i']]})

I want to count how many times each element occurs inside the lists across all rows, ideally using df.apply. For the DataFrame above I expect:

a: 2
b: 3
c: 4
d: 2
g: 1
h: 1
i: 1

2 Answers

First, note that you can use "sum" to concatenate lists, because + concatenates lists in Python:

df.keywords.sum()
# out: ['a', 'b', 'c', 'c', 'd', 'a', 'b', 'c', 'd', 'b', 'c', 'g', 'h', 'i']
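
The same reduction works with plain Python's built-in sum, which is a quick way to see why the column concatenates. A minimal sketch (the variable names are just for illustration):

# sum() with an empty-list start value concatenates, because each step evaluates list + list
lists = [['a', 'b', 'c'], ['c', 'd']]
flat = sum(lists, [])
# flat == ['a', 'b', 'c', 'c', 'd']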

Then either:

import collections
collections.Counter(df.keywords.sum())
# out: Counter({'a': 2, 'b': 3, 'c': 4, 'd': 2, 'g': 1, 'h': 1, 'i': 1})

Or:

np.unique(df.keywords.sum(), return_counts=True)
# out: (array(['a', 'b', 'c', 'd', 'g', 'h', 'i'], dtype='<U1'), array([2, 3, 4, 2, 1, 1, 1]))

Or:

uniq = np.unique(df.keywords.sum(), return_counts=True)
pd.Series(uniq[1], uniq[0])
# out:
a    2
b    3
c    4
d    2
g    1
h    1
i    1

Or:

pd.Series(collections.Counter(df.keywords.sum()))
# out: same as previous

Performance-wise it is about the same whether you use np.unique() or collections.Counter, because df.keywords.sum() is the slow part: it builds the flat list by concatenating one list at a time. If you care about performance, flattening with a pure Python list comprehension is much faster:

collections.Counter([item for sublist in df.keywords for item in sublist])
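
For a rough sense of the gap, here is a small timing sketch with timeit (the repeated-row DataFrame and the repetition count are made up for illustration; absolute numbers will vary):

import timeit

setup = ("import pandas as pd; "
         "df = pd.DataFrame({'keywords': [['a', 'b', 'c'], ['c', 'd']] * 1000})")

# Series.sum builds the flat list by repeated concatenation, which scales roughly
# quadratically with the total number of elements.
t_sum = timeit.timeit("df.keywords.sum()", setup=setup, number=10)

# The list comprehension flattens in a single linear pass.
t_flat = timeit.timeit(
    "[item for sublist in df.keywords for item in sublist]", setup=setup, number=10)

print(t_sum, t_flat)  # expect t_flat to be much smaller
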
John Zwinck

If performance is important, you can use a pure Python solution: flatten with chain.from_iterable, count the values with Counter, and finally pass the result to the DataFrame constructor:

from itertools import chain
from collections import Counter

c = Counter(chain.from_iterable(df['keywords'].tolist())) 

df = pd.DataFrame({'a': list(c.keys()), 'b':list(c.values())})
print (df)
   a  b
0  a  2
1  b  3
2  c  4
3  d  2
4  g  1
5  h  1
6  i  1
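
If you want the keyword-indexed Series shape from the question instead of a two-column frame, the same Counter c can also be passed straight to the Series constructor (a small alternative sketch, not part of the original answer):

s = pd.Series(c).sort_index()
print (s)
a    2
b    3
c    4
d    2
g    1
h    1
i    1
dtype: int64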

Or, without Counter, build a wide DataFrame from the lists, stack it into a single Series and count the values:

df = pd.DataFrame(df['keywords'].values.tolist()).stack().value_counts().to_frame('a')
print (df)
   a
c  4
b  3
a  2
d  2
g  1
i  1
h  1
jezrael