
I'm using the model builder addon for OpenNLP to create a better NER model. Following this post, I have used the code posted by markg:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.Span;
// plus DefaultModelBuilderUtil from the OpenNLP modelbuilder addon

public class ModelBuilderAddonUse {

  private static List<String> getSentencesFromSomewhere() throws Exception 
  {
      List<String> list = new ArrayList<String>();
      BufferedReader reader = new BufferedReader(new FileReader("D:\\Work\\workspaces\\default\\UpdateModel\\documentrequirements.docx"));
      String line;
      while ((line = reader.readLine()) != null) 
      {
          list.add(line);
      }
      reader.close();
      return list;
  }

  public static void main(String[] args) throws Exception {
    /**
     * establish a file to put sentences in
     */
    File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");

    /**
     * establish a file to put your NER hits in (the ones you want to keep based
     * on prob)
     */
    File knownEntities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\knownentities.txt");

    /**
     * establish a BLACKLIST file to put your bad NER hits in (also can be based
     * on prob)
     */
    File blacklistedentities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\blentities.txt");

    /**
     * establish a file to write your annotated sentences to
     */
    File annotatedSentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\annotatedSentences.txt");

    /**
     * establish a file to write your model to
     */
    File theModel = new File("D:\\Work\\workspaces\\default\\UpdateModel\\nl-ner-person.bin");


//------------create a bunch of file writers to write your results and sentences to a file

    FileWriter sentenceWriter = new FileWriter(sentences, true);
    FileWriter blacklistWriter = new FileWriter(blacklistedentities, true);
    FileWriter knownEntityWriter = new FileWriter(knownEntities, true);

//set some thresholds to decide where to write hits, you don't have to use these at all...
    double keeperThresh = .95;
    double blacklistThresh = .7;


    /**
     * Load your model as normal
     */
    TokenNameFinderModel personModel = new TokenNameFinderModel(theModel);
    NameFinderME personFinder = new NameFinderME(personModel);
    /**
     * do your normal NER on the sentences you have
     */
   for (String s : getSentencesFromSomewhere()) {
      sentenceWriter.write(s.trim() + "\n");
      sentenceWriter.flush();

      String[] tokens = s.split(" ");//better to use a tokenizer really
      Span[] find = personFinder.find(tokens);
      double[] probs = personFinder.probs();
      String[] names = Span.spansToStrings(find, tokens);
      for (int i = 0; i < names.length; i++) {
        //YOU PROBABLY HAVE BETTER HEURISTICS THAN THIS TO MAKE SURE YOU GET GOOD HITS OUT OF THE DEFAULT MODEL
        if (probs[i] > keeperThresh) {
          knownEntityWriter.write(names[i].trim() + "\n");
        }
        if (probs[i] < blacklistThresh) {
          blacklistWriter.write(names[i].trim() + "\n");
        }
      }
      personFinder.clearAdaptiveData();
      blacklistWriter.flush();
      knownEntityWriter.flush();
    }
    //flush and close all the writers
    knownEntityWriter.flush();
    knownEntityWriter.close();
    sentenceWriter.flush();
    sentenceWriter.close();
    blacklistWriter.flush();
    blacklistWriter.close();

    /**
     * THIS IS WHERE THE ADDON IS GOING TO USE THE FILES (AS IS) TO CREATE A NEW MODEL. YOU SHOULD NOT HAVE TO RUN THE FIRST PART AGAIN AFTER THIS RUNS, JUST NOW PLAY WITH THE
     * KNOWN ENTITIES AND BLACKLIST FILES AND RUN THE METHOD BELOW AGAIN UNTIL YOU GET SOME DECENT RESULTS (A DECENT MODEL OUT OF IT).
     */
    DefaultModelBuilderUtil.generateModel(sentences, knownEntities, blacklistedentities, theModel, annotatedSentences, "person", 3);


  }
}
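
One thing worth checking before the addon step: `getSentencesFromSomewhere()` above reads a `.docx` file with a plain `FileReader`, but `.docx` is a ZIP container, so that yields markup/binary bytes rather than sentences. A minimal sketch of reading one sentence per line from a plain-text export instead (file names are illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SentenceReader {

    // Read one sentence per line from a plain-text file.
    // A .docx is a ZIP archive, so export it to .txt first.
    static List<String> readSentences(Path file) throws IOException {
        return Files.readAllLines(file, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sentences", ".txt");
        Files.write(tmp, List.of("John Smith went home.", "Mary Jones stayed."));
        System.out.println(readSentences(tmp).size()); // prints 2
        Files.delete(tmp);
    }
}
```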

The code runs, but my output stops at:

    annotated sentences: 1862
    knowns: 58
    Building Model using 1862 annotations
    reading training data...

But according to the example in the post, it should go further, like this:

    Indexing events using cutoff of 5

    Computing event counts...  done. 561755 events
    Indexing...  done.
    Sorting and merging events... done. Reduced 561755 events to 127362.
    Done indexing.
    Incorporating indexed data for training...
    done.
        Number of Event Tokens: 127362
            Number of Outcomes: 3
          Number of Predicates: 106490
    ...done.

Can anyone help me fix this problem so I can generate a model? I have searched a lot but can't find any good documentation about it. I would really appreciate it, thanks.

Patrick

1 Answer


Correct the path to your training data file like this:

File sentences = new File("D:/Work/workspaces/default/UpdateModel/sentences.text");

instead of

File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");

Update

This is how I used it, by adding the files to the project folder. Try it like this:

File sentences = new File("src/training/resources/CreateModel/sentences.txt");

Check my repository on GitHub for reference.

This should help.
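
For what it's worth, both forms can be checked without touching OpenNLP: Java's `File` accepts forward slashes on every platform, and `java.nio.file.Paths.get` joins segments with the platform separator, which sidesteps both the escape-sequence error and machine-specific absolute paths. A small sketch (paths are illustrative):

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {
    public static void main(String[] args) {
        // Forward slashes work on Windows too, so no \\ escaping is needed.
        File sentences = new File("src/training/resources/CreateModel/sentences.txt");
        System.out.println(sentences.getName()); // prints sentences.txt

        // Paths.get builds the same path from individual segments.
        Path same = Paths.get("src", "training", "resources",
                              "CreateModel", "sentences.txt");
        System.out.println(same.getFileName()); // prints sentences.txt
    }
}
```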

iamgr007
  • This doesn't work; I get the following error: `Invalid escape sequence (valid ones are \b \t \n \f \r \" \' \\ )` – Patrick Nov 02 '17 at 10:14
  • The path is correct now, but the builder still quits at `Building Model using 1358 annotations reading training data...` – Patrick Nov 02 '17 at 11:35
  • 1
    You've 1862 annotated sentences right? Check for any error in the sentence 1358 – iamgr007 Nov 02 '17 at 11:50
  • I don't know how the `1862` got there, but when I run it, it ends at `Building Model using 1358 annotations`. When I go to `sentence 1358`, it's the last line of the `annotated sentences`, nothing different from the other lines. Any idea? – Patrick Nov 02 '17 at 14:10
  • It might have something to do with your training set. I guess it's too late for my suggestion @Patrick, I just wanted to know if the issue was resolved! How did it go? – iamgr007 Jan 25 '18 at 23:30
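
To follow up on the suggestion to inspect sentence 1358: assuming the addon writes OpenNLP's usual `<START:person> ... <END>` annotation markup, a quick sketch that flags lines whose tags don't pair up (an unbalanced line could make the trainer stop while "reading training data"):

```java
import java.util.ArrayList;
import java.util.List;

public class AnnotationCheck {

    // Count non-overlapping occurrences of needle in s.
    static int count(String s, String needle) {
        int n = 0, idx = 0;
        while ((idx = s.indexOf(needle, idx)) != -1) {
            n++;
            idx += needle.length();
        }
        return n;
    }

    // Return 1-based numbers of lines whose <START:...> and <END>
    // tags do not pair up.
    static List<Integer> unbalancedLines(List<String> lines) {
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            if (count(lines.get(i), "<START:") != count(lines.get(i), "<END>")) {
                bad.add(i + 1);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "<START:person> John Smith <END> went home .",
            "<START:person> Mary Jones stayed .");   // missing <END>
        System.out.println(unbalancedLines(lines)); // prints [2]
    }
}
```

Reading `annotatedSentences.txt` with `Files.readAllLines` and passing the result through `unbalancedLines` would point directly at any malformed line.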