I have 5 raters who have each rated the same 10 subjects. I've chosen to use Light's kappa to calculate inter-rater reliability because I have multiple raters. My issue is that when there is strong agreement between the raters, Light's kappa cannot be calculated due to the lack of variability, and I've followed the updated post here, which suggests using the raters package in R when there is strong agreement.
The problem is that the raters package calculates Fleiss' kappa, which, from my understanding, is not suitable for inter-rater reliability when the same raters rate all subjects (as in my case). My question is: what type of kappa statistic should I be calculating in cases where there is strong agreement? A reproducible example:
#install.packages("irr")
library(irr)
#install.packages('raters')
library(raters)
#mock dataset
rater1<- c(1,1,1,1,1,1,1,1,0,1)
rater2<- c(1,1,1,1,1,1,1,1,1,1)
rater3<- c(1,1,0,1,1,0,1,1,1,1)
rater4<- c(1,1,1,1,1,1,1,1,1,1)
rater5<- c(1,1,1,1,1,1,0,1,1,1)
df <- data.frame(rater1, rater2, rater3, rater4, rater5)
#Light's kappa
kappam.light(df)
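#to see why Light's kappa is undefined here: it is the mean of Cohen's kappa
#over all rater pairs, and rater2 and rater4 rated every subject 1, so their
#pairwise kappa is (1 - 1)/(1 - 1), which is undefined
kappa2(df[, c("rater2", "rater4")])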
#kappa using raters package
#concordance() takes a subjects x categories table of counts (the number of
#raters who chose each category for each subject), not the raw ratings
counts <- t(apply(df, 1, function(x) table(factor(x, levels = 0:1))))
concordance(counts, test = 'Normal')
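#for comparison: ordinary Fleiss' kappa from irr on the same subjects x raters
#layout (as far as I know kappam.fleiss() takes the raw ratings directly);
#included only to compare against the raters output
kappam.fleiss(df)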