# Example of Attribute Agreement Analysis

Fabric appraisers at a textile printing company rate the print quality of cotton fabric on a 1 to 5 point scale. The quality engineer wants to assess the consistency and correctness of the appraisers' ratings. The engineer asks four appraisers to rate print quality on 50 samples of fabric twice, in random order.

Because the data include a known standard for each sample, the quality engineer can assess the consistency and correctness of ratings compared to the standard as well as compared to other appraisers.

1. Open the sample data, TextilePrintQuality.MTW.
2. Choose Stat > Quality Tools > Attribute Agreement Analysis.
3. In Data are arranged as, select Attribute column and enter Response.
4. In Samples, enter Sample.
5. In Appraisers, enter Appraiser.
6. In Known standard/attribute, enter Standard.
7. Select Categories of the attribute data are ordered.
8. Click OK.
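With the attribute-column arrangement, each row of the worksheet holds one rating, identified by sample, appraiser, and trial. The layout, and the within-appraiser match rate that the analysis reports, can be sketched as follows (the appraiser names match the example, but the ratings below are illustrative, not the actual TextilePrintQuality.MTW values):

```python
# Long ("attribute column") layout: one row per rating.
# Tuples: (appraiser, sample, trial, response, standard).
# Illustrative values only -- not the actual worksheet data.
rows = [
    ("Amanda", 1, 1, 4, 4), ("Amanda", 1, 2, 4, 4),
    ("Amanda", 2, 1, 2, 2), ("Amanda", 2, 2, 2, 2),
    ("Eric",   1, 1, 4, 4), ("Eric",   1, 2, 3, 4),
    ("Eric",   2, 1, 2, 2), ("Eric",   2, 2, 2, 2),
]

def within_appraiser_match(rows, appraiser):
    """Percent of samples where the appraiser gave the same rating on every trial."""
    by_sample = {}
    for name, sample, trial, response, standard in rows:
        if name == appraiser:
            by_sample.setdefault(sample, []).append(response)
    matched = sum(1 for ratings in by_sample.values() if len(set(ratings)) == 1)
    return 100.0 * matched / len(by_sample)

print(within_appraiser_match(rows, "Amanda"))  # 100.0
print(within_appraiser_match(rows, "Eric"))    # 50.0
```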

## Interpret the results

### Within Appraisers table
Because each appraiser provides two or more ratings for each sample, the engineer can evaluate the consistency of each appraiser.
All of the appraisers have good match rates, ranging from 86% (Eric) to 100% (Amanda).
The p-values for Fleiss' kappa statistics are 0.0000 for all appraisers and all responses, with α = 0.05. Therefore, the engineer rejects the null hypothesis that the agreement is due to chance alone.
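Fleiss' kappa compares the observed agreement among ratings to the agreement expected by chance. The standard calculation can be sketched in pure Python (this is the generic Fleiss' kappa for a subjects-by-categories count table, not a reproduction of Minitab's per-appraiser, per-response output):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table where counts[i][j] is the number of ratings
    that placed subject i in category j. Every row must sum to the same
    number of ratings m."""
    n = len(counts)     # subjects
    m = sum(counts[0])  # ratings per subject
    k = len(counts[0])  # categories
    # Observed agreement per subject, averaged over subjects.
    p_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts
    ) / n
    # Chance agreement from the overall category proportions.
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: both ratings of each sample fall in a single category.
perfect = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
print(fleiss_kappa(perfect))  # 1.0
```

A kappa near 1 indicates strong agreement beyond chance; Minitab additionally reports a z statistic and p-value for the hypothesis that kappa equals zero.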
Because this example has ordinal ratings, the engineer also examines Kendall's coefficient of concordance. Kendall's coefficient of concordance ranges from 0.98446 to 1.00 across appraisers, which indicates a high level of agreement.
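Kendall's coefficient of concordance (W) measures how consistently multiple sets of ordinal ratings rank the same objects. A minimal sketch of the basic formula follows; it assigns average ranks to ties but, for brevity, omits the tie-correction term that full implementations apply to the denominator:

```python
def kendalls_w(ratings):
    """Kendall's W for ratings[r][i]: the rating that rater r gave object i.
    Ranks are computed per rater, with average ranks for ties
    (tie correction omitted for brevity)."""
    m = len(ratings)     # raters (or trials)
    n = len(ratings[0])  # objects rated

    def ranks(row):
        order = sorted(range(n), key=lambda i: row[i])
        r = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend block of tied values
            avg = (i + j) / 2 + 1           # average rank for the tied block
            for t in range(i, j + 1):
                r[order[t]] = avg
            i = j + 1
        return r

    all_ranks = [ranks(row) for row in ratings]
    rank_sums = [sum(ar[i] for ar in all_ranks) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((r - mean) ** 2 for r in rank_sums)
    return 12 * s / (m * m * (n ** 3 - n))

# Identical rankings from three raters give perfect concordance.
print(kendalls_w([[1, 2, 3, 4, 5]] * 3))  # 1.0
```

W ranges from 0 (no association) to 1 (complete concordance), which is why values near 1, as in this example, indicate strong agreement.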
### Each Appraiser vs Standard table
Because there is a known standard for each sample, the engineer can evaluate the accuracy and consistency of each appraiser's ratings.
Each appraiser rated 50 fabric samples (# Inspected). Amanda correctly judged 47 samples across trials (# Matched), for 94% matched. Eric correctly judged 41 samples across trials, for 82% matched.
The p-values for Fleiss' kappa are 0.0000 for all appraisers and all responses, with α = 0.05. Therefore, the engineer rejects the null hypothesis that the agreement is due to chance alone.
Kendall's correlation coefficient ranges from 0.951863 to 0.975168 across appraisers, which confirms the high level of agreement with the standard.
### Between Appraisers table
The Between Appraisers table shows that the appraisers agree on their ratings for 37 of the 50 samples.
The overall kappa value of 0.881705 indicates a good level of absolute agreement of the ratings between appraisers. The Kendall's coefficient of concordance of 0.976681 confirms this strong association.
The between-appraisers statistics do not compare the appraisers' ratings to the standard. Although the appraisers' ratings may be consistent, these statistics do not indicate whether the ratings are correct.
### All Appraisers vs Standard table
Because there is a known standard for each sample, the engineer can evaluate the accuracy of all the appraisers' ratings.
Considering all appraisers' assessments together, the ratings match the known standard for 37 of the 50 samples, for 74.0% matched.
The overall kappa value of 0.912082 indicates a good level of absolute agreement of the ratings between appraisers and with the standard. The Kendall's coefficient of concordance of 0.965563 confirms this strong association.
The all appraisers versus standard statistics do compare the appraisers' ratings to the standard. The engineer can conclude that the appraisers' ratings are consistent and correct.
###### Note

The p-value of 0.0000 in the output is rounded. You can safely conclude that the actual p-value is very small: less than 0.00005.
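The rounding behavior is easy to demonstrate: a value displayed with four decimal places shows as 0.0000 exactly when it is below 0.00005 (the specific values below are arbitrary examples, not output from this analysis):

```python
# A p-value rounded to 4 decimal places displays as "0.0000"
# only when the true value is below the rounding threshold 0.00005.
print(f"{0.000049:.4f}")  # 0.0000
print(f"{0.000051:.4f}")  # 0.0001
```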