
I'm writing a dissertation and using nltk.pos_tagger in my work. I can't find any information about the accuracy of this algorithm. Does anybody know where I can find such information?

Vit D
  • I don't think you can get the accuracy score anywhere, really. Like most NLP tools, this is very application-specific: it depends on how many ambiguous words you've got, whether you have ground truth to evaluate the model against, etc. I would design your dissertation in such a way that you can calculate precision and recall for your specific case. Say, use Mechanical Turk to generate human-tagged data from your corpus and then evaluate. – Everst Aug 04 '14 at 00:48
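
As a rough sketch of the evaluation suggested in the comment above (the gold-standard sentences below are hypothetical placeholders; in practice they would come from your own annotated sample), you can tag the tokens with NLTK's default tagger and compute overall accuracy plus per-tag precision and recall:

from collections import Counter
import nltk

# hypothetical hand-tagged gold data: one list of (token, tag) pairs per sentence
gold_sents = [
    [("The", "DT"), ("cat", "NN"), ("sat", "VBD"), (".", ".")],
]

correct = total = 0
tp, fp, fn = Counter(), Counter(), Counter()

for gold in gold_sents:
    tokens = [tok for tok, _ in gold]
    for (_, gold_tag), (_, pred_tag) in zip(gold, nltk.pos_tag(tokens)):
        total += 1
        if gold_tag == pred_tag:
            correct += 1
            tp[gold_tag] += 1
        else:
            fp[pred_tag] += 1   # tagger predicted this tag where it shouldn't have
            fn[gold_tag] += 1   # tagger missed this gold tag

print("accuracy: %.3f" % (correct / total))
for tag in sorted(set(tp) | set(fp) | set(fn)):
    prec = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
    rec = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
    print("%s  precision=%.2f  recall=%.2f" % (tag, prec, rec))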

1 Answer


NLTK's default POS tagger pos_tag is a MaxEnt tagger; see line 82 of https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py
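
For context, the default tagger is exposed as nltk.pos_tag, which expects an already tokenized sentence; a minimal usage sketch:

import nltk

# tokenize first, then tag; pos_tag returns a list of (token, tag) pairs
tokens = nltk.word_tokenize("NLTK ships with a default part-of-speech tagger.")
print(nltk.pos_tag(tokens))

To get a rough accuracy figure for it, you can evaluate the pickled model on a tagged corpus, e.g. the last 10% of the Brown corpus: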

from nltk.corpus import brown
from nltk.data import load

sents = brown.tagged_sents()
# hold out the last 10% of the Brown corpus as a test set
numtest = len(sents) // 10
testsents = sents[-numtest:]

# pickled MaxEnt model shipped with NLTK; it emits Penn Treebank tags, while
# Brown's default tags use the Brown tagset, so treat the resulting score with caution
_POS_TAGGER = 'taggers/maxent_treebank_pos_tagger/english.pickle'

tagger = load(_POS_TAGGER)

print(tagger.evaluate(testsents))

[out]:

alvas
  • I think you forgot to paste the output. – mbatchkarov Aug 04 '14 at 09:19
  • And how does knowing that it is a `MaxEnt tagger` answer the question about its accuracy? – Maziyar Dec 07 '18 at 10:43
  • I trained several taggers on the WSJ corpus (90% training / 10% test data). nltk-maxent-pos-tagger achieved an accuracy of 93.64% (100 iterations, rare feature cutoff = 5), while MXPOST reached 96.93% (100 iterations). Since both implementations use the same feature set, the results shouldn't be that different. Unfortunately, there's no source code available for MXPOST, but comparing nltk-maxent-pos-tagger with OpenNLP's implementation should be helpful. Link: https://github.com/arne-cl/nltk-maxent-pos-tagger#todo – Virus Aug 12 '21 at 10:34
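
For anyone wanting to run the kind of 90%/10% experiment described in the comment above, here is a minimal sketch using NLTK's built-in n-gram taggers on the Brown corpus; this is not the MXPOST / nltk-maxent-pos-tagger setup (and the WSJ data is not redistributed with NLTK), just an illustration of the train/test split and evaluation:

import nltk
from nltk.corpus import brown

sents = brown.tagged_sents(categories="news")
cutoff = int(len(sents) * 0.9)
train, test = sents[:cutoff], sents[cutoff:]   # 90% training / 10% test

# simple backoff chain: bigram -> unigram -> default tag
t0 = nltk.DefaultTagger("NN")
t1 = nltk.UnigramTagger(train, backoff=t0)
t2 = nltk.BigramTagger(train, backoff=t1)

print(t2.evaluate(test))   # overall per-token accuracy on the held-out 10%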