I am working with the Stanford CoreNLP package, which provides a set of jar files and an execution unit. I could compile and run a few test examples.
There is one sample Java example, which I compiled successfully with:
H:\Drive E\Stanford\stanfor-corenlp-full-2013~>javac -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-javadoc.jar;stanford-corenlp-3.3.0-models.jar;stanford-corenlp-3.3.0-sources.jar; StanfordCoreNlpDemo.java
When I ran it:
H:\Drive E\Stanford\stanfor-corenlp-full-2013~>java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-javadoc.jar;stanford-corenlp-3.3.0-models.jar;stanford-corenlp-3.3.0-sources.jar; StanfordCoreNlpDemo
it threw this exception:
Searching for resource: StanfordCoreNLP.properties
Searching for resource: edu/stanford/nlp/pipeline/StanfordCoreNLP.properties
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [8.7 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at java.io.ObjectInputStream$HandleTable.grow(ObjectInputStream.java:344
How can I allocate more memory on the command line to avoid this exception and run the demo?
I could run these two commands successfully:
java -cp "*" -mx1g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt
and
java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt