I would like to compute the interrater agreement of two raters for variables with two or more categories. However, I expect a prevalence problem (i.e., certain categories are likely to appear much more often in the data than others), so I am also interested in getting the prevalence-adjusted bias-adjusted kappa (PABAK). As far as I can see, the {irr} package does not offer this option, but the epi.kappa() function in {epiR} does. However, I would prefer to use {irr}, since I would like to get Cohen's kappa ({epiR} seems to be based on Fleiss' kappa). Is there any other way to obtain both Cohen's kappa and PABAK in R?
Thank you very much!
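Edit: In case it helps, one workaround I have been considering: PABAK depends only on the observed agreement (Byrt, Bishop & Carlin, 1993, give PABAK = (k·p_o − 1)/(k − 1) for k categories, which reduces to 2·p_o − 1 in the two-category case), so it can be computed by hand alongside Cohen's kappa. A minimal base-R sketch, using made-up two-rater example data (the `ratings` matrix below is purely illustrative):

```r
# Two raters, one categorical variable; each row is a subject.
# Hypothetical example data with two categories.
ratings <- cbind(
  rater1 = c("yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"),
  rater2 = c("yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes")
)

k   <- length(unique(c(ratings)))           # number of categories
tab <- table(factor(ratings[, 1]), factor(ratings[, 2]))  # confusion table
n   <- sum(tab)

po <- sum(diag(tab)) / n                          # observed agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2      # chance agreement (Cohen)

cohen_kappa <- (po - pe) / (1 - pe)               # Cohen's kappa
pabak       <- (k * po - 1) / (k - 1)             # Byrt, Bishop & Carlin (1993)

cohen_kappa
pabak
```

With real data, the manual Cohen's kappa above should match irr::kappa2(ratings) (note that `diag(tab)` assumes both raters' ratings share the same factor levels in the same order). Would this manual PABAK calculation be statistically sound, or is there a package that does both?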