
I'm trying to work out the best model to adapt for an open named entity recognition problem (biology/chemistry, so no dictionary of entities exists; they have to be identified from context).

Currently my best guess is to adapt Syntaxnet so that instead of tagging words as N, V, ADJ, etc., it learns to tag them as BEGINNING, INSIDE, or OUTSIDE (IOB notation).
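For concreteness, converting entity annotations to IOB tags can be sketched like this (the `to_iob` helper and the token-index span format are my own illustrative assumptions, not part of any of the libraries mentioned):

```python
def to_iob(tokens, entity_spans):
    """Convert token-index entity spans to per-token IOB tags.

    tokens: list of words.
    entity_spans: list of (start, end) token-index pairs, end exclusive.
    Returns one tag per token: "B" (begin), "I" (inside), or "O" (outside).
    """
    tags = ["O"] * len(tokens)
    for start, end in entity_spans:
        tags[start] = "B"          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = "I"          # continuation tokens
    return tags

tokens = ["Treatment", "with", "sodium", "chloride", "solution"]
print(to_iob(tokens, [(2, 4)]))  # ['O', 'O', 'B', 'I', 'O']
```

A tagger trained on pairs of (token sequence, tag sequence) in this format is then doing sequence labeling, exactly like POS tagging but with a different tag set.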

However, I am not sure which of these approaches is best:

  • Syntaxnet
  • word2vec
  • seq2seq (I think this is not the right one as I need it to learn on two aligned sequences, whereas seq2seq is designed for sequences of differing lengths as in translation)

I would be grateful for a pointer to the right method. Thanks!

Tom

1 Answer


Syntaxnet can be used for named entity recognition; see, e.g., Named Entity Recognition with Syntaxnet.

word2vec alone isn't very effective for named entity recognition. I don't think seq2seq is commonly used for that task either.

As drpng mentions, you may want to look at tensorflow/tree/master/tensorflow/contrib/crf. Adding an LSTM before the CRF layer typically improves results, which gives an architecture like:

[Figure: LSTM + CRF architecture]

LSTM+CRF code in TensorFlow: https://github.com/Franck-Dernoncourt/NeuroNER
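To give an idea of what the CRF layer on top of the LSTM actually does at prediction time: the LSTM produces per-token tag scores ("emissions"), the CRF adds a learned tag-transition matrix, and Viterbi decoding finds the highest-scoring tag path. Here is a minimal NumPy sketch of that decoding step (the scores and the transition penalties are made-up numbers for illustration, not trained values):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the best-scoring tag path.

    emissions: (seq_len, n_tags) per-token tag scores from the LSTM.
    transitions: (n_tags, n_tags) score of moving from tag i to tag j.
    Returns the best tag index sequence.
    """
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    backpointers = []
    for t in range(1, seq_len):
        # total[i, j]: score of being in tag j at step t via tag i at t-1
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(np.argmax(total, axis=0))
        score = np.max(total, axis=0)
    # backtrack from the best final tag
    best_tag = int(np.argmax(score))
    path = [best_tag]
    for bp in reversed(backpointers):
        best_tag = int(bp[best_tag])
        path.append(best_tag)
    return path[::-1]

tags = ["O", "B", "I"]
emissions = np.array([[2.0, 0.5, 0.1],
                      [0.3, 2.0, 0.4],
                      [0.2, 0.1, 2.0]])
# Transition scores: strongly discourage O -> I, since an I tag
# must continue an entity started by B.
transitions = np.array([[0.0, 0.0, -5.0],
                        [0.0, 0.0,  0.5],
                        [0.0, 0.0,  0.5]])
print([tags[i] for i in viterbi_decode(emissions, transitions)])
# ['O', 'B', 'I']
```

This is the key advantage of the CRF over a per-token softmax: invalid tag sequences like O followed directly by I can be penalized globally, rather than each token being classified independently.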

Franck Dernoncourt
  • OK, thank you very much! I used Syntaxnet in the end: I converted my entities to IOB notation and trained the Syntaxnet POS tagger according to the instructions here: https://github.com/tensorflow/models/tree/master/syntaxnet It worked very well; I got 78%. – Tom Feb 20 '17 at 09:12
  • @Tom good to know. For benchmarking NER systems, I personally use the conll2003 dataset as the first comparison point: it is free, small enough to be fast, large enough to train ANNs, it comes with an evaluation script, and it is well studied. – Franck Dernoncourt Feb 20 '17 at 15:10