
I am trying to create an ROC curve for an SVM, and here is the code I have used:

# packages used: e1071 (svm, tune.svm), SDMTools (confusion.matrix, prop.correct), ROCR (prediction, performance)
library(e1071)
library(SDMTools)
library(ROCR)

# learning from training
tuned <- tune.svm(y~., data=train, gamma = 10^(-6:-1), cost = 10^(1:2))
summary(tuned)


svmmodel <- svm(y~., data=train, type="C-classification",
                kernel="radial", gamma = 0.01, cost = 100, cross=5, probability=TRUE)

svmmodel

# predicting the test data
svmmodel.predict <- predict(svmmodel, subset(test, select=-y), decision.values=TRUE)
# signed distances to the separating hyperplane, used as scores for the ROC curve
svmmodel.probs <- attr(svmmodel.predict, "decision.values")
svmmodel.class <- predict(svmmodel, test)   # predicted class labels
svmmodel.labels <- test$y
# analyzing the result
svmmodel.confusion <- confusion.matrix(svmmodel.labels, svmmodel.class)
svmmodel.accuracy <- prop.correct(svmmodel.confusion)

# ROC analysis for the test data
svmmodel.prediction <- prediction(svmmodel.probs, svmmodel.labels)
svmmodel.performance <- performance(svmmodel.prediction, "tpr", "fpr")
svmmodel.auc <- performance(svmmodel.prediction, "auc")@y.values[[1]]
plot(svmmodel.performance)

but the problem is that the ROC curve comes out looking like this:

[figure: the resulting ROC curve is inverted, lying below the diagonal]

Comment: This is very data-specific and no one can say much without seeing what's in tuned. See http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example – Calimo Nov 05 '15 at 07:32

4 Answers


I have answered a similar question at MATLAB - generate confusion matrix from classifier. If, using the code given at that link, you get an inverted ROC curve like the one shown in your figure, make the following two replacements in that code:

1. Replace the line

b_pred = (tot_op>=th_vals(i,1)); 

by

b_pred = (tot_op<=th_vals(i,1));  

2. Replace the line

AUC = sum(0.5*(sens(2:end)+sens(1:end-1)).*(cspec(2:end) - cspec(1:end-1))); 

by

AUC = sum(0.5*(sens(2:end)+sens(1:end-1)).*(cspec(1:end-1) - cspec(2:end)));

in the code given on the link.
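
The same fix can be expressed in R with the ROCR objects from the question: if the decision values are oriented against the positive class, the curve comes out below the diagonal, and negating the scores flips it back. A minimal sketch, assuming svmmodel.probs and svmmodel.labels as defined in the question:

library(ROCR)

# if the curve sits below the diagonal, the scores point the wrong way for the
# positive class; negating them flips the curve (the AUC becomes 1 - old AUC)
pred.flipped <- prediction(-svmmodel.probs, svmmodel.labels)
perf.flipped <- performance(pred.flipped, "tpr", "fpr")
plot(perf.flipped)
performance(pred.flipped, "auc")@y.values[[1]]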


This can be done very easily with the 'ROCR' package. I use something like this to get the ROC curve:

p1  <- predict(svm, test, type="decision")   # 'svm' here is the fitted model object
pr  <- prediction(p1, test$status)
prf <- performance(pr, measure="tpr", x.measure="fpr")
plot(prf)
lines(x = c(0,1), y = c(0,1), col="blue")    # diagonal reference line
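
If you also want the AUC for the same curve, it can be read off the same ROCR prediction object; a small addition, assuming the pr object defined above:

# AUC from the same ROCR prediction object
auc <- performance(pr, measure = "auc")@y.values[[1]]
auc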

Did you solve the problem? I had the same problem as you: the reversed ROC curve and AUC were obtained on the test set.

In my case, it was solved by sorting the training dataset.

For example,

train <- train[order(train$y, decreasing = TRUE),]
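
Sorting works because libsvm (which e1071's svm wraps) orients the decision values towards whichever class appears first in the training data, so row order decides which way the ROC scores point. Rather than sorting, you can check the orientation and tell ROCR explicitly which class is positive; a sketch, assuming svmmodel.probs and svmmodel.labels from the question, with "A" and "B" as hypothetical class names:

# the column name of the decision values has the form "A/B": positive values
# favour class A, negative values favour class B
colnames(svmmodel.probs)

# tell ROCR which label is negative and which is positive so that it matches
# that orientation (here "B" negative, "A" positive); substitute your own labels
pred <- prediction(svmmodel.probs, svmmodel.labels, label.ordering = c("B", "A"))
perf <- performance(pred, "tpr", "fpr")
plot(perf)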

Instead of using decision.values, try this:

# f, x_test and y_name are placeholders for your svm call, test features and outcome column name
fit  <- f(x, y, probability = TRUE)
pred <- prediction(attr(predict(fit, x_test, probability = TRUE), "probabilities")[,2],
                   test[, colnames(test) == y_name])
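
A concrete version of the same idea, tied back to the question's objects; a sketch assuming the train/test data frames and binary factor y from the question, with the question's gamma and cost values:

library(e1071)
library(ROCR)

# fit with probability estimates enabled, then use class probabilities
# (rather than decision values) as the ROC scores
svmfit <- svm(y ~ ., data = train, kernel = "radial",
              gamma = 0.01, cost = 100, probability = TRUE)

pp    <- predict(svmfit, subset(test, select = -y), probability = TRUE)
probs <- attr(pp, "probabilities")

# score each case by the probability of the positive class; selecting the
# column by name avoids any dependence on the internal class order
pos.class <- levels(test$y)[2]   # assumes the second factor level is the positive class
pred <- prediction(probs[, pos.class], test$y)
perf <- performance(pred, "tpr", "fpr")
plot(perf)
performance(pred, "auc")@y.values[[1]]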