
I have a free-text description based on which I need to perform a classification. For example, the description can be that of an incident. Based on the description of the incident, I need to predict the risk associated with the event. E.g.: "A murder in town" - this description is a candidate for "high" risk.

I tried logistic regression but realized that there is currently support only for binary classification. For multi-class classification (there are only three possible values) based on a free-text description, what would be the most suitable algorithm? (Linear Regression or Naive Bayes)


2 Answers


Since you are using Spark, I assume you have big data, so - I am no expert - but after reading your answer, I would like to make some points.

Create the Training (80%) and Testing Data Sets (20%)

I would partition my data into Training (60-70%), Testing (15-20%), and Evaluation (15-20%) sets.

The idea is that you can fine-tune your classification algorithm on the Training set, but what we really want with classification tasks is for the model to classify unseen data well. So fine-tune your algorithm with the Testing set, and when you are done, use the Evaluation set to get a real understanding of how things work!
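For instance, with the Spark DataFrame API that the second answer below already uses, a minimal sketch (the 70/15/15 weights and the seed are illustrative choices; samplingData is the input DataFrame from that answer):

DataFrame[] splits = samplingData.randomSplit(new double[] { 0.7, 0.15, 0.15 }, 42L);
DataFrame training = splits[0];   // fit the model on this
DataFrame testing = splits[1];    // fine-tune against this
DataFrame evaluation = splits[2]; // touch only once, for the final estimate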

Stop words

If your data are articles from newspapers and such, I personally haven't seen any significant improvement from using more sophisticated stop-word removal approaches...

That's just a personal observation, though; if I were you, I wouldn't focus on that step.

Term Frequency

How about using Term Frequency-Inverse Document Frequency (TF-IDF) term weighting instead? You may want to read: How can I create a TF-IDF for Text Classification using Spark?

I would try both and compare!

Multinomial

Do you have any particular reason to try the multinomial distribution? If not, note that when n is 1 and k is 2 the multinomial distribution is the Bernoulli distribution, as stated in Wikipedia, and the Bernoulli model type is supported as well.

Try both and compare (this is something you have to get used to if you wish to make your model better! :) )
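With MLlib's Naive Bayes, switching the distribution is a single argument, so the comparison is cheap. A minimal sketch, assuming labelledJavaRDD is a JavaRDD<LabeledPoint> of labeled feature vectors (the second answer below builds exactly such an RDD):

// Train one model per distribution, then compare them on held-out data.
NaiveBayesModel multinomialModel = NaiveBayes.train(labelledJavaRDD.rdd(), 1.0, "multinomial");
NaiveBayesModel bernoulliModel = NaiveBayes.train(labelledJavaRDD.rdd(), 1.0, "bernoulli");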


I also see that MLlib offers Random forests, which might be worth a read, at least! ;)
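A minimal sketch of what that could look like with MLlib's RDD-based API (every parameter value here is an illustrative default, not a tuned choice, and someLabeledPoint is a hypothetical test point):

import org.apache.spark.mllib.tree.RandomForest;
import org.apache.spark.mllib.tree.model.RandomForestModel;

// 3 classes (low/medium/high), no categorical features, 10 trees.
RandomForestModel forest = RandomForest.trainClassifier(
        labelledJavaRDD, 3, new java.util.HashMap<Integer, Integer>(),
        10, "auto", "gini", 5, 32, 12345);
double prediction = forest.predict(someLabeledPoint.features());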


If your data is not that big, I would also try Support Vector Machines (SVMs) from scikit-learn, which however supports Python, so you should switch to PySpark or plain Python, abandoning Java. BTW, if you are actually going for sklearn, this might come in handy: How to split into train, test and evaluation sets in sklearn?, since Pandas plays nicely along with sklearn.

Hope this helps!


Off-topic:

This is really not the way to ask a question on Stack Overflow. Read How to ask a good question?

Personally, if I were you, I would do all the things you have done in your answer first, and then post a question summarizing my approach.

As for the bounty, you may want to read: How does the Bounty System work?


This is how I solved the above problem.

Though the prediction accuracy is not bad, the model has to be tuned further for better results.

Experts, please respond if you find anything wrong.

My input data frame has two columns "Text" and "RiskClassification"

Below is the sequence of steps to predict using Naive Bayes in Java:

  1. Add a new column "label" to the input dataframe. This column basically encodes the risk classification as a number, like below:
// Map the risk classification strings to numeric labels: LOW -> 1, MEDIUM -> 2, HIGH -> 3.
sqlContext.udf().register("myUDF", new UDF1<String, Integer>() {
    @Override
    public Integer call(String input) throws Exception {
        if ("LOW".equals(input))
            return 1;
        if ("MEDIUM".equals(input))
            return 2;
        if ("HIGH".equals(input))
            return 3;
        return 0; // unknown / missing classification
    }
}, DataTypes.IntegerType);

samplingData = samplingData.withColumn("label", functions.callUDF("myUDF", samplingData.col("riskClassification")));
  2. Create the Training (80%) and Testing Data Sets (20%).

For example:

DataFrame lowRisk = samplingData.filter(samplingData.col("label").equalTo(1));
// Sample 80% of the low-risk rows (without replacement) for training.
DataFrame lowRiskTraining = lowRisk.sample(false, 0.8);
  3. Union all the dataframes to build the complete training data.

  4. Building test data is slightly tricky. The test data should contain all the data that is not present in the training data (see the sketch below).
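A minimal sketch of steps 3 and 4 together, assuming mediumRiskTraining and highRiskTraining were sampled the same way as lowRiskTraining above (testRiskData is a hypothetical name; DataFrame.except keeps the rows of samplingData that are absent from the training set):

DataFrame trainingRiskData = lowRiskTraining.unionAll(mediumRiskTraining).unionAll(highRiskTraining);
DataFrame testRiskData = samplingData.except(trainingRiskData);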

  5. Start transformation of the training data and build the model.

  6. Tokenize the text column in the training data set.

Tokenizer tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words");
DataFrame tokenized = tokenizer.transform(trainingRiskData);
  7. Remove stop words. (Here you can also do advanced operations like lemmatization, stemming, POS tagging, etc. using the Stanford NLP library.)
StopWordsRemover remover = new StopWordsRemover().setInputCol("words").setOutputCol("filtered");
DataFrame stopWordsRemoved = remover.transform(tokenized);
  8. Compute term frequency using HashingTF. (CountVectorizer is another way to do this.)
int numFeatures = 20; // size of the hashed feature space; 20 is very small, larger values reduce hash collisions
HashingTF hashingTF = new HashingTF().setInputCol("filtered").setOutputCol("rawFeatures")
        .setNumFeatures(numFeatures);
DataFrame rawFeaturizedData = hashingTF.transform(stopWordsRemoved);

// Re-weight the raw term frequencies by inverse document frequency (TF-IDF).
IDF idf = new IDF().setInputCol("rawFeatures").setOutputCol("features");
IDFModel idfModel = idf.fit(rawFeaturizedData);

DataFrame featurizedData = idfModel.transform(rawFeaturizedData);
  9. Convert the featurized input into a JavaRDD. Naive Bayes works on LabeledPoint.
JavaRDD<LabeledPoint> labelledJavaRDD = featurizedData.select("label", "features").toJavaRDD()
    .map(new Function<Row, LabeledPoint>() {
        @Override
        public LabeledPoint call(Row row) throws Exception {
            // Column 0 is the numeric label, column 1 is the TF-IDF feature vector.
            return new LabeledPoint(Double.parseDouble(row.get(0).toString()),
                    (org.apache.spark.mllib.linalg.Vector) row.get(1));
        }
    });
  10. Build the model.
// lambda = 1.0 applies additive (Laplace) smoothing; "multinomial" is the model type.
NaiveBayesModel naiveBayesModel = NaiveBayes.train(labelledJavaRDD.rdd(), 1.0, "multinomial");
  11. Run all the above transformations on the test data as well (a minimal sketch follows).
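A minimal sketch, reusing the transformers fitted on the training data (testRiskData is the hypothetical test DataFrame from the sketch above; note that idfModel is reused, not refit, so the test documents are weighted with the training IDF statistics):

DataFrame testTokenized = tokenizer.transform(testRiskData);
DataFrame testStopWordsRemoved = remover.transform(testTokenized);
DataFrame testRawFeaturized = hashingTF.transform(testStopWordsRemoved);
DataFrame testFeaturized = idfModel.transform(testRawFeaturized);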

  12. Loop through the test data frame and perform the actions below.

  13. Create a LabeledPoint using the "label" and "features" in the test data frame.

For example, if the test data frame has the label and the features at (zero-based) column indexes 3 and 7, then:

LabeledPoint labeledPoint = new LabeledPoint(Double.parseDouble(dataFrameRow.get(3).toString()),
        (org.apache.spark.mllib.linalg.Vector) dataFrameRow.get(7));
  14. Use the prediction model to predict the label.
double predictedLabel = naiveBayesModel.predict(labeledPoint.features());
  15. Add the predicted label as a column to the test data frame.

  16. Now the test data frame has both the expected label and the predicted label.

  17. You can export the test data to CSV and do the analysis there, or you can compute the accuracy programmatically as well, as sketched below.
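A minimal sketch of the programmatic accuracy check, assuming the predictions from step 15 ended up in a DataFrame named predictedData (a hypothetical name) with columns "label" and "predictedLabel":

// Fraction of test rows whose predicted label matches the expected label.
long total = predictedData.count();
long correct = predictedData.filter(
        predictedData.col("label").equalTo(predictedData.col("predictedLabel"))).count();
double accuracy = (double) correct / total;
System.out.println("Accuracy: " + accuracy);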
