I am working on an NLP project based on Stanford CoreNLP. I have obtained the sentiment for each text/sentence, thanks to the documentation here. Now I need to get the list of nouns and their related adjectives for each text in a dataframe column, so that I can bucket them based on adjective sentiment. I tried the Stanford website, but didn't find any sample function/code to achieve this. Can I get some help on this, please? TIA
For my text_df dataframe:
from stanza.server import CoreNLPClient

with CoreNLPClient(annotators=['sentiment', 'tokenize', 'ssplit', 'pos', 'lemma', 'ner', 'depparse'],
                   timeout=60000, memory='16G') as client:
    j = h = 0
    for ind in text_df.index:
        # submit the request to the server
        ann = client.annotate(text_df["text1"][ind])
        offset = 0
        for text_sentences, sentence in enumerate(ann.sentence):
            dp = sentence.basicDependencies
            # map sentence-local 1-based token indices to the token text
            token_dict = {sentence.token[i].tokenEndIndex - offset: sentence.token[i].word
                          for i in range(len(sentence.token))}
            offset += len(sentence.token)
            out_parse = [(dp.edge[i].source, dp.edge[i].target, dp.edge[i].dep)
                         for i in range(len(dp.edge))]
            for source, target, dep in out_parse:
                print(dep, token_dict[source], token_dict[target])
                text_df1['dep'][j] = dep
                text_df1['source'][j] = token_dict[source]
                text_df1['target'][j] = token_dict[target]
                text_df1['Sent_id'][j] = text_sentences
                j += 1
            for token in sentence.token:
                text_df2['target'][h] = token.word
                text_df2['pos'][h] = token.pos
                text_df2['Sent_id'][h] = text_sentences
                h += 1
This code builds one dataframe (text_df1) with the dependency relation (dep) plus the source and target tokens, and another (text_df2) with each token and its POS tag. My requirement is to get each token/word in a sentence with the nouns and their related adjectives. Finally I combined those two tables and got the required dataframe. But my issue now is: for the sentence "raj went to office and his office is very far", I see the following in text_df1:

    nsubj went raj
    nmod went office
    dep went is
    case office to
    cc office and
    conj office office
    nmod:poss office his
    advmod is far
    advmod far very
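For the bucketing step, one way to pull noun/adjective pairs out of the two tables is to keep amod edges directly (adjective modifying a noun) and nsubj edges whose governor is tagged as an adjective (copular cases like "office is far"). A rough sketch, assuming text_df1/text_df2 have exactly the columns built above; noun_adjective_pairs is a hypothetical helper, and matching on word strings rather than token indices is a simplification:

```python
import pandas as pd

def noun_adjective_pairs(dep_df, pos_df):
    """Extract (noun, adjective) pairs from a dependency table and a POS table.

    dep_df: columns ['dep', 'source', 'target', 'Sent_id'] as in text_df1
    pos_df: columns ['target', 'pos', 'Sent_id'] as in text_df2
    """
    # look up the POS tag of each edge's governor (source) word
    src_pos = pos_df.rename(columns={'target': 'source', 'pos': 'source_pos'})
    merged = dep_df.merge(src_pos, on=['source', 'Sent_id'], how='left')
    rows = []
    for _, r in merged.iterrows():
        if r['dep'] == 'amod':
            # amod(noun, adjective): the governor is the noun
            rows.append({'noun': r['source'], 'adjective': r['target']})
        elif r['dep'] == 'nsubj' and str(r['source_pos']).startswith('JJ'):
            # nsubj(far, office) with an adjectival governor: copular predication
            rows.append({'noun': r['target'], 'adjective': r['source']})
    return pd.DataFrame(rows, columns=['noun', 'adjective'])
```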
Ideally I should also see the dependency nsubj(far, office), but I am missing this particular value. Could someone help me find out where I went wrong?
When I check the same sentence in http://nlp.stanford.edu:8080/parser/index.jsp, I do see nsubj(far, office) there.
I guess I need to write Python code to generate the 'Universal dependencies, enhanced' output shown on their website. Can anyone help, please?
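The annotation object returned by CoreNLPClient already carries the enhanced graphs alongside basicDependencies, so one option is to read the sentence's enhancedPlusPlusDependencies field instead. A minimal sketch, assuming the same stanza/CoreNLP setup as in the snippet above (enhanced_pairs and edges_to_pairs are hypothetical helper names):

```python
def edges_to_pairs(edges, words):
    """Render 1-based (source, target, relation) edges as 'relation(governor, dependent)'."""
    return [f"{rel}({words[s - 1]}, {words[t - 1]})" for s, t, rel in edges]

def enhanced_pairs(text):
    """Annotate text and return the enhanced dependency pairs for each sentence."""
    # requires the stanza package and a local CoreNLP install, as in the snippet above
    from stanza.server import CoreNLPClient
    with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'lemma', 'depparse'],
                       timeout=60000, memory='16G') as client:
        ann = client.annotate(text)
        out = []
        for sentence in ann.sentence:
            # same protobuf as before; just read a different graph field
            dp = sentence.enhancedPlusPlusDependencies
            words = [tok.word for tok in sentence.token]
            out.append(edges_to_pairs([(e.source, e.target, e.dep) for e in dp.edge], words))
        return out
```

With this graph, the extra propagated relations such as nsubj(far, office) should show up in the edge list the same way they do on the website's "enhanced" view.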