Kappa Agreement Test

Cohen's kappa coefficient (κ) is a statistic used to measure inter-rater reliability (as well as intra-rater reliability) for qualitative (categorical) items. [1] It is generally considered a more robust measure than a simple percent-agreement calculation, since κ takes into account the possibility that agreement occurs by chance. There is controversy surrounding Cohen's kappa due to the difficulty of interpreting indices of agreement; some researchers have suggested that it is conceptually simpler to evaluate disagreement between items. [2] For more information, see Limitations.

A weighted kappa allows disagreements to be weighted differently. It is computed as

κ = 1 − (Σ_{i=1}^{k} Σ_{j=1}^{k} w_{ij} x_{ij}) / (Σ_{i=1}^{k} Σ_{j=1}^{k} w_{ij} m_{ij})

where k is the number of codes and w_{ij}, x_{ij}, and m_{ij} are elements of the weight, observed, and expected matrices, respectively. If the diagonal cells contain weights of 0 and all off-diagonal cells contain weights of 1, this formula produces the same kappa value as the unweighted calculation.
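The two calculations can be sketched in Python. This is a minimal illustration, not a reference implementation: the function name `cohens_kappa` and the example confusion matrix are assumptions, and the expected matrix m is formed from the marginal proportions as described above.

```python
import numpy as np

def cohens_kappa(confusion, weights=None):
    """Cohen's kappa from a k x k confusion (agreement) matrix.

    If `weights` is given (k x k, typically 0 on the diagonal), the
    weighted form 1 - sum(w*x)/sum(w*m) is used, where x is the observed
    proportion matrix and m is the expected matrix under chance agreement.
    """
    x = np.asarray(confusion, dtype=float)
    x = x / x.sum()                               # observed proportions
    m = np.outer(x.sum(axis=1), x.sum(axis=0))    # expected from marginals
    if weights is None:
        # Unweighted kappa: (p_o - p_e) / (1 - p_e)
        p_o = np.trace(x)
        p_e = np.trace(m)
        return (p_o - p_e) / (1 - p_e)
    w = np.asarray(weights, dtype=float)
    return 1.0 - (w * x).sum() / (w * m).sum()

# Hypothetical example: two raters classify 50 items into 2 codes.
conf = np.array([[20, 5],
                 [10, 15]])
print(cohens_kappa(conf))                   # unweighted kappa
# 0/1 weights (0 on the diagonal, 1 off it) reproduce the same value:
w01 = 1 - np.eye(2)
print(cohens_kappa(conf, w01))
```

With these counts, both calls return κ = 0.4, illustrating the claim that 0/1 weights make the weighted formula coincide with the unweighted one.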