
I am working on an NLP project based on Stanford CoreNLP. I have already got the sentiment for each text/sentence, thanks to the documentation. Now I need to get the list of nouns and their related adjectives for each text in a dataframe column, so that I can bucket the nouns based on the sentiment of the adjectives. I searched the Stanford website but did not find any sample function/code that achieves this. Can I get some help on this please? TIA

For my text_df dataframe,

with CoreNLPClient(annotators=['sentiment', 'tokenize', 'ssplit', 'pos', 'lemma', 'ner', 'depparse'],
                   timeout=60000, memory='16G') as client:
    j, h = 0, 0
    for ind in text_df.index:
        # submit the request to the server
        ann = client.annotate(text_df["text1"][ind])
        offset = 0
        for sent_id, sentence in enumerate(ann.sentence):
            dp = sentence.basicDependencies
            # map 1-based token indices within this sentence to words
            token_dict = {sentence.token[i].tokenEndIndex - offset: sentence.token[i].word
                          for i in range(len(sentence.token))}
            offset += len(sentence.token)

            out_parse = [(dp.edge[i].source, dp.edge[i].target, dp.edge[i].dep)
                         for i in range(len(dp.edge))]
            for source, target, dep in out_parse:
                print(dep, token_dict[source], token_dict[target])
                text_df1.loc[j, 'dep'] = dep
                text_df1.loc[j, 'source'] = token_dict[source]
                text_df1.loc[j, 'target'] = token_dict[target]
                text_df1.loc[j, 'Sent_id'] = sent_id
                j += 1
            for token in sentence.token:
                text_df2.loc[h, 'target'] = token.word
                text_df2.loc[h, 'pos'] = token.pos
                text_df2.loc[h, 'Sent_id'] = sent_id
                h += 1

This code basically creates one dataframe (text_df1) with the dependency relation (dep) plus source and target tokens, and another (text_df2) with each token and its POS tag. My requirement is to get each noun in a sentence together with its related adjectives. Finally I merged the two tables and got the required dataframe. But my issue now is: for the sentence "raj went to office and his office is very far" I am seeing the following in text_df1:

    nsubj went raj
    nmod went office
    dep went is
    case office to
    cc office and
    conj office office
    nmod:poss office his
    advmod is far
    advmod far very
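For illustration, here is a minimal sketch of what I am after, using a hand-built sample of the text_df1 edge table. The POS tags and the `noun_adj_pairs` helper are my own, just for illustration: it picks up `amod` edges (attributive adjectives like "fast car") and `nsubj` edges whose head is an adjective (predicative adjectives like "office is far").

```python
import pandas as pd

# Hypothetical sample in the shape of the text_df1 edge table above
edges = pd.DataFrame([
    ('nsubj', 'went', 'raj'),
    ('nmod', 'went', 'office'),
    ('nsubj', 'far', 'office'),   # predicative adjective: "office is far"
    ('amod', 'car', 'fast'),      # attributive adjective: "fast car"
], columns=['dep', 'source', 'target'])

# POS lookup in the shape of text_df2 (tags are illustrative)
pos = {'raj': 'NNP', 'went': 'VBD', 'office': 'NN',
       'far': 'JJ', 'car': 'NN', 'fast': 'JJ'}

def noun_adj_pairs(edges, pos):
    """Collect (noun, adjective) pairs from amod and adjectival-nsubj edges."""
    pairs = []
    for dep, source, target in edges.itertuples(index=False):
        if dep == 'amod' and pos.get(source, '').startswith('NN'):
            pairs.append((source, target))      # "fast car" -> (car, fast)
        elif (dep == 'nsubj' and pos.get(source, '') == 'JJ'
              and pos.get(target, '').startswith('NN')):
            pairs.append((target, source))      # "office is far" -> (office, far)
    return pairs

print(noun_adj_pairs(edges, pos))
# [('office', 'far'), ('car', 'fast')]
```

With the pairs in hand, each noun can then be bucketed by the sentiment of its adjective.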

Ideally I should also see the dependency nsubj(far, office), but I am missing that particular edge. Could someone help me figure out where I went wrong?

When I check the same sentence at http://nlp.stanford.edu:8080/parser/index.jsp, I do see nsubj(far, office) there.

I guess I need to write Python code that generates the 'Universal dependencies, enhanced' output shown on their website. Can anyone help please?
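From what I can tell, the protobuf Sentence returned by the Python client also carries the enhanced dependency graphs alongside `basicDependencies`, so swapping in `enhancedPlusPlusDependencies` should surface the propagated nsubj(far, office) edge. A sketch of the change (the `edges_from` helper name is my own):

```python
def edges_from(dp, token_dict):
    """Flatten a protobuf dependency graph into (dep, source_word, target_word)."""
    return [(e.dep, token_dict[e.source], token_dict[e.target]) for e in dp.edge]

# Inside the annotation loop, use the enhanced graph instead of the basic one:
#   dp = sentence.enhancedPlusPlusDependencies   # was: sentence.basicDependencies
#   out_parse = edges_from(dp, token_dict)
```

The enhanced graph duplicates edges across conjunctions (e.g. the subject is propagated to the second conjunct), which is exactly where the basic graph drops nsubj(far, office).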

  • Hello! Welcome to SO! Does this answer your question? https://stackoverflow.com/questions/56527814/stanford-typed-dependencies-using-corenlp-in-python –  Dec 17 '19 at 08:54
  • Does this answer your question? [Stanford typed dependencies using coreNLP in python](https://stackoverflow.com/questions/56527814/stanford-typed-dependencies-using-corenlp-in-python) – alexisdevarennes Dec 17 '19 at 11:37
  • @VenkataShivaram Thanks a lot for the link! It helped a lot, but I came across another issue. I gave the sentence "Miller is nice but Phill is good and he sold her fast car" and I am getting the dependencies (nice, Miller), (good, Phill), (car, fast). I was expecting (fast, car), but it parsed as 'amod', so I'm kinda confused. Could you please help here? TIA – Krishnalekha Moolayil Dec 18 '19 at 06:01
  • @alexisdevarennes Yes!! It helps a lot, but I need to understand how the parsing works and how nsubj and amod are identified here. Actually my requirement is to get the noun and related adjective for each sentence in a dataframe. TIA – Krishnalekha Moolayil Dec 18 '19 at 06:03
  • @KrishnalekhaMoolayil Well! Can you please share the code that you had tried. –  Dec 18 '19 at 07:03
  • For my text_df dataframe, with CoreNLPClient(annotators=['sentiment','tokenize','ssplit','pos','lemma','ner', 'depparse'], timeout=60000, memory='16G') as client: # submit the request to the server for ind in text_df.index: – Krishnalekha Moolayil Dec 19 '19 at 08:21
  • @VenkataShivaram I have updated my query ..can you check pls – Krishnalekha Moolayil Dec 19 '19 at 08:40

0 Answers