
I am writing a paper about the validity of a billing code in hospitalized children. I am a very novice RStudio user. I need the confidence intervals for the sensitivity, specificity, and positive and negative predictive values, but I can't figure out how to get them.

My data has 3 columns: ID, true value, billing value.

Here is my code:

confusionMatrix(table(finalcodedataset$billing_value, finalcodedataset$true_value), 
                positive="1", boot=TRUE, boot_samples=4669, alpha=0.05)

Here is the output:

Confusion Matrix and Statistics

       0    1
  0 4477  162
  1   10   20

               Accuracy : 0.9632          
                 95% CI : (0.9574, 0.9684)
    No Information Rate : 0.961           
    P-Value [Acc > NIR] : 0.238           

                  Kappa : 0.1796          
 Mcnemar's Test P-Value : <2e-16          

            Sensitivity : 0.109890        
            Specificity : 0.997771        
         Pos Pred Value : 0.666667        
         Neg Pred Value : 0.965079        
             Prevalence : 0.038981        
         Detection Rate : 0.004284        
   Detection Prevalence : 0.006425        
      Balanced Accuracy : 0.553831        

       'Positive' Class : 1   
camille
3 Answers


You can use the epiR package for this purpose.

Example:

library(epiR)

# 2x2 table in the cell order epi.tests() expects:
# test+/outcome+, test+/outcome-, test-/outcome+, test-/outcome-
data <- as.table(matrix(c(670, 202, 74, 640), nrow = 2, byrow = TRUE))
rval <- epi.tests(data, conf.level = 0.95)
print(rval)

          Outcome +    Outcome -      Total
Test +          670          202        872
Test -           74          640        714
Total           744          842       1586

Point estimates and 95 % CIs:
---------------------------------------------------------
Apparent prevalence                    0.55 (0.52, 0.57)
True prevalence                        0.47 (0.44, 0.49)
Sensitivity                            0.90 (0.88, 0.92)
Specificity                            0.76 (0.73, 0.79)
Positive predictive value              0.77 (0.74, 0.80)
Negative predictive value              0.90 (0.87, 0.92)
Positive likelihood ratio              3.75 (3.32, 4.24)
Negative likelihood ratio              0.13 (0.11, 0.16)
---------------------------------------------------------
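Applied to the counts in the question's output (treating billing code "1" as the positive class), a minimal sketch would be the following; the cell values are read off the posted confusion matrix, so double-check they match your data:

library(epiR)

# Cells from the question's confusion matrix, with "1" as the positive class:
# TP = 20, FP = 10, FN = 162, TN = 4477
# epi.tests() expects: test+/outcome+, test+/outcome-, test-/outcome+, test-/outcome-
tab <- as.table(matrix(c(20, 10, 162, 4477), nrow = 2, byrow = TRUE))
epi.tests(tab, conf.level = 0.95)

This prints sensitivity, specificity, and the predictive values with their 95% confidence intervals in one call.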

caret (like several other packages) uses the Clopper-Pearson (exact binomial) method to calculate the confidence interval it reports for accuracy.

Your 2x2 table is effectively reversed, since the TP (true positive) count sits in the bottom-right cell. If the TP were in the top-left cell, the variables (A, B, C, D) below would be assigned the other way around.

# Cell counts read off the question's confusion matrix
# (the TP for class "1" sits in the bottom-right cell):
# A = TP, B = FP, C = FN, D = TN
D <- 4477
C <- 162
B <- 10
A <- 20

Acc <- (A + D) / (A + B + C + D)
Sensitivity <- A / (A + C)
Specificity <- D / (D + B)
P <- (A + C) / (A + B + C + D)   # prevalence of the positive class
PPV <- (Sensitivity * P) / ((Sensitivity * P) + ((1 - Specificity) * (1 - P)))
NPV <- (Specificity * (1 - P)) / (((1 - Sensitivity) * P) + (Specificity * (1 - P)))

# Clopper-Pearson (exact binomial) interval for accuracy;
# x is the number of misclassified observations
n <- A + B + C + D
x <- n - (A + D)
alpha <- 0.05

ub <- 1 - ((1 + (n - x + 1) / (x * qf(alpha * 0.5, 2 * x, 2 * (n - x + 1))))^-1)
lb <- 1 - ((1 + (n - x) / ((x + 1) * qf(1 - (alpha * 0.5), 2 * (x + 1), 2 * (n - x))))^-1)
CI <- c(lb, ub)

> Acc
[1] 0.9631613
> CI
[1] 0.9573536 0.9683800
> Sensitivity
[1] 0.1098901
> Specificity
[1] 0.9977713
> PPV
[1] 0.6666667
> NPV
[1] 0.9650787

Here is also a good resource for where these formulas come from.
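If you also want exact intervals for sensitivity, specificity, PPV, and NPV (not just accuracy), base R's binom.test() uses the same Clopper-Pearson method for a single proportion. A small sketch reusing the counts defined above (the ci() helper is just a convenience wrapper added here, not part of any package):

# Exact (Clopper-Pearson) intervals via binom.test(), reusing A, B, C, D from above
ci <- function(x, n) binom.test(x, n)$conf.int

ci(A, A + C)   # sensitivity = TP / (TP + FN)
ci(D, D + B)   # specificity = TN / (TN + FP)
ci(A, A + B)   # PPV         = TP / (TP + FP)
ci(D, D + C)   # NPV         = TN / (TN + FN)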

the_kipper

The following reproducible example is partially inspired by ROC curve from training data in caret.

library(caret)
library(mlbench)  # provides the Sonar data set

data(Sonar)
ctrl <- trainControl(method = "cv", summaryFunction = twoClassSummary,
                     classProbs = TRUE, savePredictions = TRUE)
set.seed(42)
fit1 <- train(Class ~ ., data = Sonar, method = "rf", trControl = ctrl)

# Keep only the resampled predictions made with the best tuning parameters
bestmodel <- merge(fit1$bestTune, fit1$pred)
mtx <- confusionMatrix(table(bestmodel$pred, bestmodel$obs))$table

 #     M   R
 # M 104  23
 # R   7  74

# 95% Confidence Intervals (Wald / normal approximation)

## Sensitivity
sens_errors <- sqrt(sensitivity(mtx) * (1 - sensitivity(mtx)) / sum(mtx[,1]))
sensLower <- sensitivity(mtx) - 1.96 * sens_errors
sensUpper <- sensitivity(mtx) + 1.96 * sens_errors


## Specificity
spec_errors <- sqrt(specificity(mtx) * (1 - specificity(mtx)) / sum(mtx[,2]))
specLower <- specificity(mtx) - 1.96 * spec_errors
specUpper <- specificity(mtx) + 1.96 * spec_errors

## Positive Predictive Values
ppv_errors <- sqrt(posPredValue(mtx) * (1 - posPredValue(mtx)) / sum(mtx[1,]))
ppvLower <- posPredValue(mtx) - 1.96 * ppv_errors
ppvUpper <- posPredValue(mtx) + 1.96 * ppv_errors


## Negative Predictive Values
npv_errors <- sqrt(negPredValue(mtx) * (1 - negPredValue(mtx)) / sum(mtx[2,]))
npvLower <- negPredValue(mtx) - 1.96 * npv_errors
npvUpper <- negPredValue(mtx) + 1.96 * npv_errors
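
To see the results side by side, you can collect the point estimates and the bounds computed above into a data frame (just a convenience step, not something caret requires):

# Combine the point estimates and Wald interval bounds computed above
data.frame(
  statistic = c("Sensitivity", "Specificity", "PPV", "NPV"),
  estimate  = c(sensitivity(mtx), specificity(mtx), posPredValue(mtx), negPredValue(mtx)),
  lower     = c(sensLower, specLower, ppvLower, npvLower),
  upper     = c(sensUpper, specUpper, ppvUpper, npvUpper)
)

Keep in mind these are normal-approximation (Wald) intervals, so they can extend slightly below 0 or above 1 when the estimated proportion is close to 0 or 1.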