I want to calculate the term frequencies of words in a text corpus. I've been using NLTK's word_tokenize followed by probability.FreqDist for some time to get this done. word_tokenize returns a list, which FreqDist converts to a frequency distribution. However, I recently came across the Counter class in collections (collections.Counter), which seems to be doing the exact same thing. Both FreqDist and Counter have a most_common(n) function which returns the n most common words. Does anyone know if there's a difference between these two? Is one faster than the other? Are there cases where one would work and the other wouldn't?
You could test the speed on a large corpus using `timeit`. `collections.Counter` will only give you the total count for each word, not a relative frequency distribution. Play around with it and see if it suits your needs. – wwii Jan 05 '16 at 04:42
1 Answer
`nltk.probability.FreqDist` is a subclass of `collections.Counter`.
From the docs:
A frequency distribution for the outcomes of an experiment. A frequency distribution records the number of times each outcome of an experiment has occurred. For example, a frequency distribution could be used to record the frequency of each word type in a document. Formally, a frequency distribution can be defined as a function mapping from each sample to the number of times that sample occurred as an outcome.
The inheritance is explicit in the code, and essentially there's no difference in how a `Counter` and a `FreqDist` are initialized; see https://github.com/nltk/nltk/blob/develop/nltk/probability.py#L106
So speed-wise, creating a `Counter` and a `FreqDist` should be about the same. The difference in speed should be insignificant, but it's good to note that the overheads could be:

- the compilation of the class when defining it in an interpreter
- the cost of duck-typing in `.__init__()`
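As a rough sanity check, one could time both constructors on the same token list with `timeit`. The token list and repetition count below are arbitrary choices for illustration, and the absolute numbers will vary by machine:

```python
import timeit

# Shared setup: build the same synthetic token list for both timings.
setup = """
from collections import Counter
from nltk import FreqDist
tokens = ['the', 'quick', 'brown', 'fox'] * 10000
"""

# Time constructing each type from the identical token list.
t_counter = timeit.timeit('Counter(tokens)', setup=setup, number=100)
t_freqdist = timeit.timeit('FreqDist(tokens)', setup=setup, number=100)

print('Counter:  %.4fs' % t_counter)
print('FreqDist: %.4fs' % t_freqdist)
```

In practice the two timings come out very close, which is what the shared `__init__` path would suggest.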
The major difference is the various functions that `FreqDist` provides for statistical / probabilistic Natural Language Processing (NLP), e.g. finding hapaxes. The full list of functions by which `FreqDist` extends `Counter` is as follows:
```python
>>> from collections import Counter
>>> from nltk import FreqDist
>>> x = FreqDist()
>>> y = Counter()
>>> set(dir(x)).difference(set(dir(y)))
set(['plot', 'hapaxes', '_cumulative_frequencies', 'r_Nr', 'pprint', 'N', 'unicode_repr', 'B', 'tabulate', 'pformat', 'max', 'Nr', 'freq', '__unicode__'])
```
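A few of those `FreqDist`-only methods in action, on a tiny hand-made token list standing in for the output of `word_tokenize`:

```python
from nltk import FreqDist

# A toy token list; word_tokenize on a real corpus would produce something similar.
tokens = ['the', 'cat', 'sat', 'on', 'the', 'mat']
fd = FreqDist(tokens)

print(fd.N())                # total number of tokens observed: 6
print(fd.freq('the'))        # relative frequency of 'the': 2/6
print(sorted(fd.hapaxes()))  # words occurring exactly once
```

`N()`, `freq()`, and `hapaxes()` have no counterpart on a plain `Counter`, which only stores the raw counts.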
When it comes to using `FreqDist.most_common()`, it's actually using the parent function from `Counter`, so the speed of retrieving the sorted `most_common` list is the same for both types.
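A quick sketch confirming that both types produce the same `most_common` output for the same tokens:

```python
from collections import Counter
from nltk import FreqDist

tokens = ['a', 'b', 'a', 'c', 'a', 'b']

# Both call the same inherited Counter.most_common under the hood.
print(Counter(tokens).most_common(2))   # [('a', 3), ('b', 2)]
print(FreqDist(tokens).most_common(2))  # [('a', 3), ('b', 2)]
```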
Personally, when I just want to retrieve counts, I use `collections.Counter`. But when I need to do some statistical manipulation, I either use `nltk.FreqDist` or I dump the `Counter` into a `pandas.DataFrame` (see Transform a Counter object into a Pandas DataFrame).
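For the pandas route, a minimal sketch of one common way to do that conversion (the `count` column name here is my own choice, not anything mandated by pandas):

```python
from collections import Counter

import pandas as pd

counts = Counter(['the', 'cat', 'sat', 'on', 'the', 'mat'])

# Treat each word as a row index and its count as a single column.
df = pd.DataFrame.from_dict(counts, orient='index', columns=['count'])
print(df.sort_values('count', ascending=False))
```

From there, the usual DataFrame machinery (sorting, normalizing into relative frequencies, plotting) is available.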