I have the following sample DataFrame:
 No   category   problem_definition_stopwords
 175  2521       ['coffee', 'maker', 'brewing', 'properly', '2', '420', '420', '420']
 211  1438       ['galley', 'work', 'table', 'stuck']
 912  2698       ['cloth', 'stuck']
 572  2521       ['stuck', 'coffee']
The 'problem_definition_stopwords' column has already been tokenized, with stopwords removed.
I want to create n-grams from the 'problem_definition_stopwords' column. Specifically, I want to extract the n-grams with the highest pointwise mutual information (PMI).
Essentially, I want to find the words that co-occur together much more often than I would expect them to by chance.
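For reference, the score I have in mind for a word pair (x, y) is the standard PMI, which compares the observed joint probability against what independence would predict:

PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) )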
I tried the following code:
import nltk
from nltk.collocations import *
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
# errored out here
finder = BigramCollocationFinder.from_words(nltk.corpus.genesis.words(df['problem_definition_stopwords']))
# only bigrams that appear 3+ times
finder.apply_freq_filter(3)
# return the 10 n-grams with the highest PMI
finder.nbest(bigram_measures.pmi, 10)
The error I received is on the line marked above:

TypeError: join() argument must be str or bytes, not 'list'
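From digging around, I suspect the issue is that nltk.corpus.genesis.words() expects corpus file ids, not a DataFrame column of token lists, so NLTK ends up calling join() on my lists. My current guess at a fix (an untested sketch; it assumes df is the frame above and relies on BigramCollocationFinder.from_documents, which I believe accepts an iterable of token lists) is:

import nltk
from nltk.collocations import BigramCollocationFinder

bigram_measures = nltk.collocations.BigramAssocMeasures()

# build the finder directly from the already-tokenized rows,
# treating each row's token list as one document
finder = BigramCollocationFinder.from_documents(df['problem_definition_stopwords'])

# only bigrams that appear 3+ times
finder.apply_freq_filter(3)

# the 10 bigrams with the highest PMI
print(finder.nbest(bigram_measures.pmi, 10))

(With only four short rows, apply_freq_filter(3) will probably discard every bigram in this sample, so the threshold may need lowering just to see any output.)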
Edit: a more portable format for the DataFrame:
>>> df.columns
Index(['No', 'category', 'problem_definition_stopwords'], dtype='object')
>>> df.to_dict()
{'No': {0: 175, 1: 211, 2: 912, 3: 572}, 'category': {0: 2521, 1: 1438, 2: 2698, 3: 2521}, 'problem_definition_stopwords': {0: ['coffee', 'maker', 'brewing', 'properly', '2', '420', '420', '420'], 1: ['galley', 'work', 'table', 'stuck'], 2: ['cloth', 'stuck'], 3: ['stuck', 'coffee']}}
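If it helps reproduce the problem, the sample frame can be rebuilt from that dict (a minimal sketch, assuming pandas is imported as pd):

import pandas as pd

# reconstruct the sample DataFrame from the to_dict() output above
d = {'No': {0: 175, 1: 211, 2: 912, 3: 572},
     'category': {0: 2521, 1: 1438, 2: 2698, 3: 2521},
     'problem_definition_stopwords': {0: ['coffee', 'maker', 'brewing', 'properly', '2', '420', '420', '420'],
                                      1: ['galley', 'work', 'table', 'stuck'],
                                      2: ['cloth', 'stuck'],
                                      3: ['stuck', 'coffee']}}
df = pd.DataFrame(d)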