tokens = ['The', 'wage', 'productivity', 'nexus', 'the', 'process', 'of', 'development', ...]
I am trying to convert a list of tokens into their lemmatized form using SpaCy's Lemmatizer. Here is the documentation I am using.
My code:
from spacy.lemmatizer import Lemmatizer
from spacy.lookups import Lookups
lookups = Lookups()
lookups.add_table("lemma_rules")
lemmatizer = Lemmatizer(lookups)
lemmas = []
for tokens in filtered_tokens:
    lemmas.append(lemmatizer(tokens))
Error message:
TypeError Traceback (most recent call last)
<ipython-input-...> in <module>
7 lemmas = []
8 for tokens in filtered_tokens:
----> 9 lemmas.append(lemmatizer(tokens))
TypeError: __call__() missing 1 required positional argument: 'univ_pos'
I understood from this discussion how SpaCy's Lemmatizer works, and I understand it in theory. However, I am not sure how to implement it in my case.
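As far as I can tell, the lemmatizer has to be called per word with that word's universal POS tag as the second argument, something like the sketch below (the "VERB" tag is just a guess on my part, and since my "lemma_rules" table is still empty I expect it would just return the word unchanged):

# My rough understanding of the call: word text plus its universal POS tag.
# With only an empty "lemma_rules" table this probably returns the word as-is.
lemmatizer("running", "VERB")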
How can I find out the univ_pos for my tokens?
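The only workaround I can think of (I am not sure this is the intended way) is to run the raw text through a full spaCy pipeline so the tagger assigns token.pos_, and then either pass that as univ_pos or read token.lemma_ directly; en_core_web_sm is just the model I happen to have installed:

import spacy

# Possible workaround: let a full pipeline assign universal POS tags,
# then use token.pos_ as univ_pos (or just take token.lemma_ directly).
nlp = spacy.load("en_core_web_sm")
doc = nlp("The wage productivity nexus the process of development")
for token in doc:
    print(token.text, token.pos_, token.lemma_)

Is this the right direction, or is there a way to get univ_pos for a plain list of tokens without re-running the whole pipeline?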