
I have four sets of data manually tagged as 0 or 1 by four different people. I need to produce the final labelled data, as a single set of 0/1 labels, from these four sets. I have calculated the degree of agreement between the annotators as A-B: 0.3276, A-C: 0.3263, A-D: 0.4917, B-C: 0.2896, B-D: 0.4052, C-D: 0.3540.

I do not know how to use this to calculate the final data as a single set. Please help.

1 Answer


The kappa coefficient works only for a pair of annotators. For more than two, you need an extension of it. One popular option is the extension proposed by Richard Light in 1971; another is the average expected agreement over all annotator pairs, proposed by Davies and Fleiss in 1982. I am not aware of any readily available calculator that will compute these for you, so you may have to implement the code yourself.

There is, however, a Wikipedia page on Fleiss' kappa which you might find helpful.
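To give you an idea of what an implementation looks like, here is a minimal sketch of Fleiss' kappa in plain Python. The labels below are made up for illustration; they are not your data. You would build `counts` from your four annotators' 0/1 labels, one row per item.

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of annotators who put item i in category j.
    Every item must be rated by the same number of annotators."""
    n = len(counts)        # number of items
    m = sum(counts[0])     # annotators per item
    k = len(counts[0])     # number of categories

    # Observed agreement per item, averaged over all items.
    p_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts
    ) / n

    # Chance agreement from the overall category proportions.
    p_e = sum((sum(row[j] for row in counts) / (n * m)) ** 2 for j in range(k))

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical 0/1 labels from four annotators (A, B, C, D) for five items:
labels = [
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
counts = [[row.count(0), row.count(1)] for row in labels]
print(round(fleiss_kappa(counts), 4))  # prints 0.3939
```

Note that Fleiss' kappa gives you one overall agreement score for the group, not the final labels; for the labels themselves, a simple majority vote over the four annotations per item is the usual starting point.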

These techniques can only be used for nominal variables. If your data is not on a nominal scale, use a different measure, such as the intraclass correlation coefficient.
