Kappa is the ratio of the proportion of times that the appraisers agree (corrected for chance agreement) to the maximum proportion of times that the appraisers could agree (corrected for chance agreement).
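In symbols, writing $p_o$ for the observed proportion of agreement and $p_e$ for the proportion of agreement expected by chance (standard notation, though not shown in Minitab's output), this definition is:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

The numerator is the observed agreement corrected for chance, and the denominator is the maximum possible agreement (1) corrected for chance, which is exactly the ratio described above.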
Use kappa statistics to assess the degree of agreement among nominal or ordinal ratings made by multiple appraisers who evaluate the same samples.
Minitab can calculate both Fleiss's kappa and Cohen's kappa. Cohen's kappa is a popular statistic for measuring assessment agreement between 2 raters. Fleiss's kappa is a generalization of Cohen's kappa for more than 2 raters. In Attribute Agreement Analysis, Minitab calculates Fleiss's kappa by default.
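Outside Minitab, both statistics are available in common Python libraries. The sketch below uses hypothetical ratings, and assumes scikit-learn and statsmodels are installed, to show the 2-rater and multi-rater cases side by side:

```python
# A minimal sketch of both statistics outside Minitab, assuming
# scikit-learn and statsmodels are installed; the ratings are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Cohen's kappa: exactly 2 raters, one label per sample from each.
rater_a = ["pass", "pass", "fail", "pass", "fail", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(cohen_kappa_score(rater_a, rater_b))

# Fleiss's kappa: more than 2 raters. Each row is one sample rated by
# 3 raters (0 = fail, 1 = pass); aggregate_raters() turns the table of
# labels into the per-category counts that fleiss_kappa() expects.
ratings = np.array([[0, 0, 1],
                    [1, 1, 1],
                    [0, 1, 0],
                    [1, 1, 0],
                    [0, 0, 0]])
counts, _ = aggregate_raters(ratings)
print(fleiss_kappa(counts))
```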
Minitab can calculate Cohen's kappa when your data satisfy the following requirements:
- To calculate Cohen's kappa for Within Appraiser, you must have 2 trials for each appraiser.
- To calculate Cohen's kappa for Between Appraisers, you must have exactly 2 appraisers, each with 1 trial.
- To calculate Cohen's kappa for Each Appraiser vs Standard and All Appraisers vs Standard, you must provide a known standard for each sample (see the sketch after this list).
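As a rough illustration of these comparisons (a sketch with hypothetical data, not Minitab's procedure), Within Appraiser agreement is the kappa between an appraiser's two trials, and Appraiser vs Standard agreement is the kappa between a trial and the known standard:

```python
# Hypothetical data: one appraiser's two trials plus the known standard.
from sklearn.metrics import cohen_kappa_score

trial_1  = ["good", "bad", "good", "good", "bad"]
trial_2  = ["good", "bad", "bad",  "good", "bad"]
standard = ["good", "bad", "good", "bad",  "bad"]

# Within Appraiser: agreement between the same appraiser's two trials.
print(cohen_kappa_score(trial_1, trial_2))

# Each Appraiser vs Standard: agreement between a trial and the standard.
print(cohen_kappa_score(trial_1, standard))
```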
Kappa values range from –1 to +1. The higher the value of kappa, the stronger the agreement (a worked example follows this list):
- When Kappa = 1, perfect agreement exists.
- When Kappa = 0, agreement is the same as would be expected by chance.
- When Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.
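For example, with hypothetical values $p_o = 0.90$ (the appraisers agree on 90% of the samples) and $p_e = 0.60$:

$$\kappa = \frac{0.90 - 0.60}{1 - 0.60} = \frac{0.30}{0.40} = 0.75$$

That is, the appraisers achieve 75% of the maximum agreement possible beyond chance.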
The Automotive Industry Action Group (AIAG) suggests that a kappa value of at least 0.75 indicates good agreement. However, larger kappa values, such as 0.90, are preferred.
When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients are usually more appropriate than kappa alone for measuring association, because kappa treats the ratings as nominal and counts every disagreement equally, whereas Kendall's coefficients account for the ordering of the ratings (a rating of 2 is closer to 1 than a rating of 5 is).
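The contrast can be seen with hypothetical severity ratings from two appraisers. In this sketch, scipy's Kendall's tau rank correlation stands in for Minitab's Kendall's coefficients as a related ordinal-association statistic:

```python
# Hypothetical severity ratings (1-5) from two appraisers. Kappa treats a
# 1-vs-2 disagreement the same as a 1-vs-5 disagreement; Kendall's tau
# credits near-misses on the ordinal scale.
from scipy.stats import kendalltau
from sklearn.metrics import cohen_kappa_score

appraiser_1 = [1, 2, 3, 4, 5, 2, 3]
appraiser_2 = [1, 3, 3, 5, 5, 2, 4]

print(cohen_kappa_score(appraiser_1, appraiser_2))  # unordered agreement
tau, p_value = kendalltau(appraiser_1, appraiser_2)
print(tau)                                          # ordered association
```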
For more information, see Kappa statistics and Kendall's coefficients.