
I need to calculate precision and recall from a CSV that contains multiclass classification results.

To be more specific, my CSV is structured as follows:

real_class1, classified_class1
real_class2, classified_class3
real_class3, classified_class4
real_class4, classified_class2

In total there are six distinct classes.

In the binary case I have no problem understanding how to calculate True Positives, False Positives, True Negatives and False Negatives. But with multiple classes I don't know how to proceed.

Can someone show me an example? Preferably in Python?

Steve
  • Build a confusion matrix, and follow the instructions [here](https://stackoverflow.com/questions/48100173/how-to-get-precision-recall-and-f-measure-from-confusion-matrix-in-python/48101802#48101802) – desertnaut Mar 05 '18 at 19:14
  • Any suggestion on how to create the confusion matrix? – Steve Mar 05 '18 at 20:03
  • http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html - you have both `y_pred` and `y_true` in your CSV – desertnaut Mar 05 '18 at 20:30
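A minimal sketch of what the comment suggests; the class labels below are made up for illustration, not taken from the question's CSV:

```python
from sklearn.metrics import confusion_matrix

# Toy ground-truth and predicted labels (hypothetical example)
y_true = ["cat", "dog", "bird", "cat"]
y_pred = ["cat", "dog", "cat", "bird"]

# Rows correspond to the true class, columns to the predicted class;
# passing labels= fixes the row/column order explicitly.
cm = confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"])
print(cm)
# [[0 1 0]
#  [1 1 0]
#  [0 0 1]]
```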

1 Answer


As suggested in the comments, you have to create the confusion matrix and then follow these steps:

(I'm assuming that you are using Spark in order to get better performance in your machine-learning processing.)

from __future__ import division
import pandas as pd
import numpy as np
import pickle
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, functions as fn
from sklearn.metrics import confusion_matrix

def getFirstColumn(line):
    parts = line.split(',')
    return parts[0]

def getSecondColumn(line):
    parts = line.split(',')
    return parts[1]

# Initialization
conf= SparkConf()
conf.setAppName("ConfusionMatrixPrecisionRecall")

sc = SparkContext(conf= conf) # SparkContext
sqlContext = SQLContext(sc) # SqlContext

data = sc.textFile('YOUR_FILE_PATH') # Load dataset

y_true = data.map(getFirstColumn).collect() # Split from line the class
y_pred = data.map(getSecondColumn).collect() # Split from line the tags

cm = confusion_matrix(y_true, y_pred)  # don't shadow the imported function
print("Confusion matrix:\n%s" % cm)

# The True Positives are simply the diagonal elements
TP = np.diag(cm)
print("\nTP:\n%s" % TP)

# The False Positives are the sum of the respective column, minus the diagonal element (i.e. the TP element)
FP = np.sum(cm, axis=0) - TP
print("\nFP:\n%s" % FP)

# The False Negatives are the sum of the respective row, minus the diagonal (i.e. TP) element
FN = np.sum(cm, axis=1) - TP
print("\nFN:\n%s" % FN)

num_classes = cm.shape[0]  # one row/column per class
TN = []

for i in range(num_classes):
    temp = np.delete(cm, i, 0)    # delete ith row
    temp = np.delete(temp, i, 1)  # delete ith column
    TN.append(temp.sum())
print("\nTN:\n%s" % TN)

precision = TP/(TP+FP)
recall = TP/(TP+FN)

print("\nPrecision:\n%s" % precision)

print("\nRecall:\n%s" % recall)
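As a sanity check, the per-class arithmetic above can be verified against scikit-learn's built-in `precision_recall_fscore_support` without any Spark machinery (the labels below are a hypothetical toy example):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Hypothetical true/predicted labels, standing in for the CSV columns
y_true = ["a", "b", "c", "a", "b", "c"]
y_pred = ["a", "c", "c", "a", "b", "b"]

cm = confusion_matrix(y_true, y_pred)
TP = np.diag(cm)
FP = cm.sum(axis=0) - TP  # column sums minus diagonal
FN = cm.sum(axis=1) - TP  # row sums minus diagonal

precision = TP / (TP + FP)
recall = TP / (TP + FN)

# scikit-learn computes the same per-class values directly
p, r, f, s = precision_recall_fscore_support(y_true, y_pred)
assert np.allclose(precision, p)
assert np.allclose(recall, r)
```

This avoids re-deriving the metrics by hand when a plain Python/sklearn pipeline is enough.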
Aso Strife
  • 1) There is no mention of Spark in OP 2) you import `pandas`, `pickle`, and `pyspark.sql.functions` without using them 3) you initialize `sqlContext` without using it 4) you evidently have used parts of my linked answer verbatim, without a reference (let alone an upvote)... – desertnaut Mar 06 '18 at 15:08