# Interpret the key results for Attribute Agreement Analysis

Complete the following steps to interpret an attribute agreement analysis. Key output includes kappa statistics, Kendall's statistics, and the attribute agreement graphs.

## Step 1: Evaluate the appraiser agreement visually

To determine the consistency of each appraiser's ratings, evaluate the Within Appraisers graph. Compare the percentage matched (blue circle) with the confidence interval for the percentage matched (red line) for each appraiser.

To determine the correctness of each appraiser's ratings, evaluate the Appraiser vs Standard graph. Compare the percentage matched (blue circle) with the confidence interval for the percentage matched (red line) for each appraiser.

###### Note

Minitab displays the Within Appraisers graph only when you have multiple trials.
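The "percentage matched" point on these graphs is simply the fraction of samples an appraiser rated consistently (or correctly), with a confidence interval around it. A minimal Python sketch, assuming hypothetical counts; note that Minitab's graph shows an exact binomial interval, while the Wilson score interval below is a stdlib-only approximation:

```python
import math

def percent_matched_ci(matched, n, z=1.96):
    """Percent of samples rated consistently, with an approximate 95%
    Wilson score interval (Minitab displays an exact binomial interval)."""
    p = matched / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return 100 * p, 100 * (center - half), 100 * (center + half)

# Hypothetical appraiser: 45 of 50 samples rated the same across trials
pct, lower, upper = percent_matched_ci(45, 50)  # pct = 90.0
```

An appraiser whose interval is wide or whose point sits well below the others is the one to investigate first.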

## Step 2: Assess the consistency of responses for each appraiser

To determine the consistency of each appraiser's ratings, evaluate the kappa statistics in the Within Appraisers table. When the ratings are ordinal, you should also evaluate Kendall's coefficients of concordance. Minitab displays the Within Appraisers table when each appraiser rates an item more than once.

Use kappa statistics to assess the degree of agreement of the nominal or ordinal ratings made by multiple appraisers when the appraisers evaluate the same samples.

Kappa values range from –1 to +1. The higher the value of kappa, the stronger the agreement, as follows:
• When Kappa = 1, perfect agreement exists.
• When Kappa = 0, agreement is the same as would be expected by chance.
• When Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.

The AIAG suggests that a kappa value of at least 0.75 indicates good agreement. However, larger kappa values, such as 0.90, are preferred.
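The kappa calculation compares observed agreement with the agreement expected by chance. A minimal Python sketch of Cohen's kappa for one appraiser's two trials (Minitab calculates Fleiss's kappa by default, which generalizes to more than two ratings; the pass/fail data below are hypothetical):

```python
from collections import Counter

def cohens_kappa(ratings_1, ratings_2):
    """Cohen's kappa between two sets of ratings of the same samples."""
    n = len(ratings_1)
    # Observed agreement: fraction of samples rated identically
    p_observed = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    # Chance agreement, from each rating set's marginal frequencies
    c1, c2 = Counter(ratings_1), Counter(ratings_2)
    p_chance = sum((c1[cat] / n) * (c2[cat] / n) for cat in c1.keys() | c2.keys())
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings from one appraiser's two trials of 8 samples
trial_1 = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
trial_2 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
kappa = cohens_kappa(trial_1, trial_2)  # 0.75: good agreement per the AIAG guideline
```

Here 7 of 8 ratings match (0.875 observed agreement) against 0.5 expected by chance, giving kappa = 0.75.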

When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone.

###### Note

Remember that the Within Appraisers table indicates whether the appraisers' ratings are consistent, but not whether the ratings agree with the reference values. Consistent ratings aren't necessarily correct ratings.

## Step 3: Assess the correctness of responses for each appraiser

To determine the correctness of each appraiser's ratings, evaluate the kappa statistics in the Each Appraiser vs Standard table. When the ratings are ordinal, you should also evaluate Kendall's correlation coefficients. Minitab displays the Each Appraiser vs Standard table when you specify a reference value for each sample.

Use kappa statistics to assess the degree of agreement of the nominal or ordinal ratings made by multiple appraisers when the appraisers evaluate the same samples.

Kappa values range from –1 to +1. The higher the value of kappa, the stronger the agreement, as follows:
• When Kappa = 1, perfect agreement exists.
• When Kappa = 0, agreement is the same as would be expected by chance.
• When Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.

The AIAG suggests that a kappa value of at least 0.75 indicates good agreement. However, larger kappa values, such as 0.90, are preferred.

When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone.
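For ordinal ratings against a known standard, a rank correlation captures how far off the misses are, not merely whether a rating matched. A minimal Python sketch of Kendall's tau-b; Minitab's Kendall's correlation coefficient for appraiser-vs-standard is a closely related rank statistic, and the severity ratings below are hypothetical:

```python
import math

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation between two ordinal rating lists,
    with the standard tie correction in the denominator."""
    n = len(x)
    conc = disc = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                ties_x += 1          # pair tied in x
            if dy == 0:
                ties_y += 1          # pair tied in y
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    conc += 1        # concordant pair
                else:
                    disc += 1        # discordant pair
    n0 = n * (n - 1) // 2
    return (conc - disc) / math.sqrt((n0 - ties_x) * (n0 - ties_y))

# Hypothetical 1-5 severity ratings: appraiser swaps the two worst samples
appraiser = [1, 2, 3, 5, 4]
standard = [1, 2, 3, 4, 5]
tau = kendall_tau_b(appraiser, standard)  # 0.8
```

A near-miss (rating a 4 as a 5) barely lowers tau, whereas it counts as a full mismatch for kappa; that is why both statistics are worth checking for ordinal data.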

## Step 4: Assess the consistency of responses between appraisers

To determine the consistency between the appraisers' ratings, evaluate the kappa statistics in the Between Appraisers table. When the ratings are ordinal, you should also evaluate Kendall's coefficient of concordance.

Use kappa statistics to assess the degree of agreement of the nominal or ordinal ratings made by multiple appraisers when the appraisers evaluate the same samples.

Kappa values range from –1 to +1. The higher the value of kappa, the stronger the agreement, as follows:
• When Kappa = 1, perfect agreement exists.
• When Kappa = 0, agreement is the same as would be expected by chance.
• When Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.

The AIAG suggests that a kappa value of at least 0.75 indicates good agreement. However, larger kappa values, such as 0.90, are preferred.

When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone.
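Kendall's coefficient of concordance (W) summarizes how similarly all appraisers rank the samples, from 0 (no association) to 1 (identical rankings). A minimal Python sketch of the untied-case formula W = 12S / (m²(n³ − n)); Minitab additionally applies a tie correction, and the ratings below are hypothetical:

```python
def kendalls_w(ratings_by_appraiser):
    """Kendall's W for m appraisers rating the same n samples.
    ratings_by_appraiser: one list of ordinal ratings per appraiser."""
    def to_ranks(xs):
        # Convert ratings to ranks; tied ratings share their average rank
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        ranks = [0.0] * len(xs)
        i = 0
        while i < len(xs):
            j = i
            while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            for k in range(i, j + 1):
                ranks[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return ranks

    m, n = len(ratings_by_appraiser), len(ratings_by_appraiser[0])
    # Sum each sample's ranks across appraisers
    rank_sums = [sum(col) for col in zip(*map(to_ranks, ratings_by_appraiser))]
    mean_sum = sum(rank_sums) / n
    s = sum((r - mean_sum) ** 2 for r in rank_sums)
    # Untied-case formula; Minitab adds a correction when ratings are tied
    return 12 * s / (m ** 2 * (n ** 3 - n))

w_perfect = kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4]])  # 1.0
```

When every appraiser ranks the samples identically, the rank sums are maximally spread and W = 1; when the rankings cancel out, the rank sums are equal and W = 0.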

###### Note

Remember that the Between Appraisers table indicates whether the appraisers' ratings are consistent, but not whether the ratings agree with the reference values. Consistent ratings aren't necessarily correct ratings.

## Step 5: Assess the correctness of responses for all appraisers

To determine the correctness of all the appraisers' ratings, evaluate the kappa statistics in the All Appraisers vs Standard table. When the ratings are ordinal, you should also evaluate Kendall's coefficients of concordance.

Use kappa statistics to assess the degree of agreement of the nominal or ordinal ratings made by multiple appraisers when the appraisers evaluate the same samples.

Kappa values range from –1 to +1. The higher the value of kappa, the stronger the agreement, as follows:
• When Kappa = 1, perfect agreement exists.
• When Kappa = 0, agreement is the same as would be expected by chance.
• When Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.

The AIAG suggests that a kappa value of at least 0.75 indicates good agreement. However, larger kappa values, such as 0.90, are preferred.

When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone.
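The All Appraisers vs Standard assessment is the strictest summary in the output: a sample counts as matched only when every appraiser matches the standard on every trial. A minimal Python sketch of that percent-matched figure (the data layout and ratings below are hypothetical; Minitab's table also reports kappa, as described above):

```python
def percent_all_vs_standard(ratings, standard):
    """Percent of samples where every appraiser's every trial
    matches the standard. ratings[appraiser][trial][sample]."""
    n = len(standard)
    matched = sum(
        all(trial[i] == standard[i] for appraiser in ratings for trial in appraiser)
        for i in range(n)
    )
    return 100 * matched / n

standard = ["good", "bad", "good", "bad"]
ratings = [
    [["good", "bad", "good", "bad"],   # appraiser 1, trial 1
     ["good", "bad", "good", "bad"]],  # appraiser 1, trial 2
    [["good", "bad", "good", "bad"],   # appraiser 2, trial 1
     ["good", "bad", "bad", "bad"]],   # appraiser 2, trial 2 misses sample 3
]
pct = percent_all_vs_standard(ratings, standard)  # 75.0
```

A single wrong rating by any appraiser on any trial removes that sample from the matched count, so this percentage is never higher than any individual appraiser-vs-standard figure.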
