
I want to use Spark NLP to do sentiment analysis on a Spark dataset's column column1 using the default trained model. This is my code:

DocumentAssembler docAssembler = (DocumentAssembler) new DocumentAssembler().setInputCol("column1")
                .setOutputCol("document");

Tokenizer tokenizer = (Tokenizer) ((Tokenizer) new Tokenizer().setInputCols(new String[] { "document" }))
                .setOutputCol("token");
String[] inputCols = new String[] { "token", "document" };

SentimentDetector sentiment = ((SentimentDetector) ((SentimentDetector) new SentimentDetector().setInputCols(inputCols)).setOutputCol("sentiment"));
Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] { docAssembler, tokenizer, sentiment });

// Fit the pipeline to training documents.
PipelineModel pipelineFit = pipeline.fit(ds);
ds = pipelineFit.transform(ds);
ds.show();

Here ds is a Dataset<Row> whose columns include column1. I am getting the following error:

java.util.NoSuchElementException: Failed to find a default value for dictionary
at org.apache.spark.ml.param.Params$$anonfun$getOrDefault$2.apply(params.scala:780)
at org.apache.spark.ml.param.Params$$anonfun$getOrDefault$2.apply(params.scala:780)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.ml.param.Params$class.getOrDefault(params.scala:779)
at org.apache.spark.ml.PipelineStage.getOrDefault(Pipeline.scala:42)
at org.apache.spark.ml.param.Params$class.$(params.scala:786)
at org.apache.spark.ml.PipelineStage.$(Pipeline.scala:42)
at com.johnsnowlabs.nlp.annotators.sda.pragmatic.SentimentDetector.train(SentimentDetector.scala:62)
at com.johnsnowlabs.nlp.annotators.sda.pragmatic.SentimentDetector.train(SentimentDetector.scala:12)
at com.johnsnowlabs.nlp.AnnotatorApproach.fit(AnnotatorApproach.scala:45)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)

I have gone through examples, but I was not able to find any clear example or documentation of doing sentiment analysis in Java using the default model.
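For reference, the NoSuchElementException above comes from the pragmatic SentimentDetector's dictionary parameter, which has no default value. A rough sketch of supplying one is below; the dictionary path and file format are placeholder assumptions, and the exact setDictionary overload (delimiter, ReadAs constant, options map) varies across Spark NLP versions, so treat the call shown as illustrative rather than exact:

```java
import java.util.HashMap;

import com.johnsnowlabs.nlp.annotators.sda.pragmatic.SentimentDetector;
import com.johnsnowlabs.nlp.util.io.ReadAs;

// The pragmatic SentimentDetector trains from a sentiment dictionary and has
// no bundled default model, hence "Failed to find a default value for dictionary".
SentimentDetector sentiment = (SentimentDetector) ((SentimentDetector) new SentimentDetector()
        .setInputCols(new String[] { "token", "document" }))
        .setOutputCol("sentiment");
// Hypothetical dictionary file with lines like "superb,positive";
// the delimiter and options arguments here are assumptions.
sentiment.setDictionary("src/main/resources/sentiment-dict.txt", ",", ReadAs.TEXT,
        new HashMap<String, String>());
```

If you do not have a sentiment dictionary, the pretrained Vivekn model (used in the accepted answer) avoids this requirement entirely.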

AngryLeo

1 Answer


So finally I figured it out. Final code:

DocumentAssembler docAssembler = (DocumentAssembler) new DocumentAssembler().setInputCol("column1")
                .setOutputCol("document");

Tokenizer tokenizer = (Tokenizer) ((Tokenizer) new Tokenizer().setInputCols(new String[] { "document" }))
                .setOutputCol("token");
String[] inputCols = new String[] { "token", "document" };

ViveknSentimentModel sentiment = (ViveknSentimentModel) ViveknSentimentModel
        .load("/path/to/pretrained model folder");

Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] { docAssembler, tokenizer, sentiment });

// Fit the pipeline to training documents.
PipelineModel pipelineFit = pipeline.fit(ds);
ds = pipelineFit.transform(ds);

Models can be downloaded from here.

  • You could instead do this in Scala; it should work similarly in Java: ```import com.johnsnowlabs.nlp.annotator.ViveknSentimentModel
val sentimentDetector = ViveknSentimentModel.pretrained().setInputCols(Array("token", "sentence")).setOutputCol("sentiment")``` – Arash Jan 14 '21 at 19:06
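A Java counterpart to the Scala snippet in the comment above might look like the following. This is a hedged sketch: it assumes pretrained() is reachable from Java as a static forwarder and can download the default Vivekn model at runtime, and the casts mirror the ones needed elsewhere in this code because the Scala builder methods return the base annotator type when called from Java:

```java
import com.johnsnowlabs.nlp.annotators.sda.vivekn.ViveknSentimentModel;

// Fetch the default pretrained Vivekn sentiment model at runtime instead of
// loading a manually downloaded copy from disk with load().
ViveknSentimentModel sentiment = (ViveknSentimentModel) ((ViveknSentimentModel) ViveknSentimentModel
        .pretrained()
        .setInputCols(new String[] { "token", "document" }))
        .setOutputCol("sentiment");
```

The resulting model can be dropped into the same Pipeline stages array as in the answer above, replacing the load() call.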