
How can I extract noun phrases from text using spaCy?
I am not referring to part of speech tags. In the documentation I cannot find anything about noun phrases or regular parse trees.

CentAu

5 Answers


If you want base NPs, i.e. NPs without coordination, prepositional phrases or relative clauses, you can use the noun_chunks iterator on the Doc and Span objects:

>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
...     np.text
u'The cat'
u'the dog'
u'the basket'
u'the door'

If you need something else, the best way is to iterate over the words of the sentence and consider the syntactic context to determine whether the word governs the type of phrase you want. If it does, yield its subtree:

from spacy.symbols import nsubj, nsubjpass, dobj, iobj, pobj

# Dependency labels whose heads typically govern a noun phrase
np_labels = {nsubj, nsubjpass, dobj, iobj, pobj}  # probably others too

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.subtree
syllogism_
    Dear syllogism, can you tell me what are the "probably other" tags that one can add to make the code complete? I would like also to extract things like "the baby and his toys". – user1419243 Feb 23 '18 at 15:22
  • 3
    @user1419243 check out `dir(spacy.symbols)` – duhaime Apr 25 '19 at 10:44
  • Just gives me `` – Superdooperhero Nov 19 '19 at 10:06
  • @Superdooperhero: I also got that generator object. For anyone who's interested see my answer below (which should at least clarify things). – Victoria Stuart Dec 21 '19 at 00:56
  • 1
    @Superdooperhero, that's because the `iter_nps` function defined in the answer is a generator function. If you're not familiar with the generator pattern you should read up (https://wiki.python.org/moin/Generators), but essentially they offer lazy execution to `yield` the next item each time the function is called, rather than constructing the whole list at once and keeping in memory. You can access the items that are generated by the generator function using the `next` keyword or in a loop, e.g: `for np_label in iter_nps(doc): print np_label` – mdmjsh Dec 23 '19 at 21:53
  • 4
    just in case anyone find it helpful - `from spacy.en import English` did not work for me. so instead I had to use `from spacy.lang.en import English` – ganjaam May 31 '20 at 05:09
  • Is it normal if different noun chunks are printed every time I run your topmost code? Not like entirely different, but say if 4 chunks were printed at the first go, 3 of them might be printed next. –  Jul 14 '20 at 10:04
  • This didn't work for me. Victoria Stuart's answer below did. – Utkarsh Dalal Jun 10 '22 at 14:13
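
The generator behaviour discussed in the comments above can be seen with a plain-Python sketch (no spaCy needed; the names are illustrative only):

```python
def iter_items(items, wanted):
    # A generator function: yields matches lazily instead of building a list.
    for item in items:
        if item in wanted:
            yield item

gen = iter_items(["cat", "sleeps", "basket"], {"cat", "basket"})
print(gen)        # prints a generator object, not the items
print(next(gen))  # prints "cat" -- the first yielded item
print(list(gen))  # prints ['basket'] -- the remaining items
```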
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp('Bananas are an excellent source of potassium.')
for np in doc.noun_chunks:
    print(np.text)
'''
  Bananas
  an excellent source
  potassium
'''

for word in doc:
    print('word.dep:', word.dep, ' | ', 'word.dep_:', word.dep_)
'''
  word.dep: 429  |  word.dep_: nsubj
  word.dep: 8206900633647566924  |  word.dep_: ROOT
  word.dep: 415  |  word.dep_: det
  word.dep: 402  |  word.dep_: amod
  word.dep: 404  |  word.dep_: attr
  word.dep: 443  |  word.dep_: prep
  word.dep: 439  |  word.dep_: pobj
  word.dep: 445  |  word.dep_: punct
'''

from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj])
print('np_labels:', np_labels)
'''
  np_labels: {416, 422, 429, 430, 439}
'''

For background on the `yield` keyword, see: https://www.geeksforgeeks.org/use-yield-keyword-instead-return-keyword-python/

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.dep_

iter_nps(doc)
'''
  <generator object iter_nps at 0x7fd7b08b5bd0>
'''

## Modified method:
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            print(word.text, word.dep_)

iter_nps(doc)
'''
  Bananas nsubj
  potassium pobj
'''

doc = nlp('BRCA1 is a tumor suppressor protein that functions to maintain genomic stability.')
for np in doc.noun_chunks:
    print(np.text)
'''
  BRCA1
  a tumor suppressor protein
  genomic stability
'''

iter_nps(doc)
'''
  BRCA1 nsubj
  that nsubj
  stability dobj
'''
Victoria Stuart

You can also get nouns from a sentence like this:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("When Sebastian Thrun started working on self-driving cars at "
              "Google in 2007, few people outside of the company took him "
              "seriously. “I can tell you very senior CEOs of major American "
              "car companies would shake my hand and turn away because I wasn’t "
              "worth talking to,” said Thrun, in an interview with Recode earlier "
              "this week.")
    # doc text is from the spaCy website
    for x in doc:
        if x.pos_ in ("NOUN", "PROPN", "PRON"):
            print(x)
    # this prints nouns, proper nouns, and pronouns
Talha Tayyab

If you want to specify more exactly which kind of noun phrase you want to extract, you can use textacy's matches function. You can pass any combination of POS tags. For example,

textacy.extract.matches(doc, "POS:ADP POS:DET:? POS:ADJ:? POS:NOUN:+")

will return spans consisting of a preposition, optionally followed by a determiner and/or adjective, and one or more nouns.

Textacy is built on top of spaCy, so the two work well together.

Suzana

`from spacy.en import English` may give you an error:

No module named 'spacy.en'

All language data was moved to the submodule `spacy.lang` in spaCy 2.0+.

Use `from spacy.lang.en import English` instead.

Then do all the remaining steps as @syllogism_ answered

Talha Tayyab