This thread contains a nice example of how to use a wrapper for Stanford's CoreNLP library. Here is the example I am using:
from pycorenlp import StanfordCoreNLP

# assumes a CoreNLP server is already running on port 9000
nlp = StanfordCoreNLP('http://localhost:9000')

res = nlp.annotate(
    "I love you. I hate him. You are nice. He is dumb",
    properties={
        'annotators': 'sentiment',
        'outputFormat': 'json',
        'timeout': 1000,
    })

# one result entry per sentence found by the server
for s in res["sentences"]:
    print("%d: '%s': %s %s" % (
        s["index"],
        " ".join([t["word"] for t in s["tokens"]]),
        s["sentimentValue"],
        s["sentiment"]))
Say I have 10,000+ sentences that I want to analyze like in this example. Is it possible to process them in parallel, i.e. to multithread the requests?
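Since each annotate() call is just an HTTP request, and the CoreNLP server handles concurrent requests, one approach is to batch the sentences and send the batches from a thread pool. Below is a minimal sketch using concurrent.futures; the chunk size (100), worker count (8), the larger timeout, and the sample sentence list are all assumptions you would tune for your own setup:

from concurrent.futures import ThreadPoolExecutor
from pycorenlp import StanfordCoreNLP

# hypothetical stand-in for your 10,000+ sentences
sentences = ["I love you.", "I hate him.", "You are nice.", "He is dumb."] * 2500

PROPS = {
    'annotators': 'sentiment',
    'outputFormat': 'json',
    'timeout': 30000,  # ms; assumption: 1000 ms is easy to exceed under load
}

nlp = StanfordCoreNLP('http://localhost:9000')

def annotate_chunk(text):
    # one HTTP request per chunk; the server parallelizes across requests
    return nlp.annotate(text, properties=PROPS)

# batch sentences so each request carries more than one sentence
CHUNK = 100
chunks = [" ".join(sentences[i:i + CHUNK])
          for i in range(0, len(sentences), CHUNK)]

with ThreadPoolExecutor(max_workers=8) as pool:
    for res in pool.map(annotate_chunk, chunks):
        for s in res["sentences"]:
            print(s["sentimentValue"], s["sentiment"])

On the client side this is I/O-bound (the threads just wait on HTTP responses), so Python's GIL is not a bottleneck; the actual parsing happens inside the Java server. If the server itself becomes the bottleneck, it can be started with more worker threads via the -threads option of StanfordCoreNLPServer.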