Complete the following steps to interpret an attribute gage study. Key output includes bias metrics, model fit indicators, and the gage performance curve.

Bias is a measure of a measurement system's accuracy. Bias is calculated as the difference between the known standard value of a reference part and the observed average measurement. A low bias value indicates that the attribute gage measures parts close to their reference values.
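As a quick illustration of the concept, the sketch below computes bias from a set of hypothetical repeated readings of one reference part. The numbers are invented for illustration, and the sign convention (observed average minus reference) is an assumption; only the magnitude matters for judging accuracy.

```python
# Illustrative bias calculation (hypothetical data, not Minitab output).
from statistics import mean

reference = 10.00                                   # known standard value of the part
measurements = [10.02, 9.98, 10.05, 10.01, 10.03]   # hypothetical repeated readings

# Sign convention assumed here: positive bias means the gage reads high on average.
bias = mean(measurements) - reference
print(round(bias, 4))
```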

To determine whether bias in the measurement system is statistically significant, compare the p-value to the significance level. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that bias exists when, in fact, it does not.
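The decision rule can be sketched as follows; the p-value shown is hypothetical, standing in for the value reported in the study output.

```python
# Illustrative significance check (hypothetical p-value, not study output).
alpha = 0.05     # chosen significance level
p_value = 0.012  # hypothetical p-value from the bias test

if p_value <= alpha:
    decision = "Bias is statistically significant"
else:
    decision = "No significant bias detected"
print(decision)
```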

If you change the default setting to use the regression method instead of the AIAG method, the p-value may differ slightly.

The normal probability plot shows the percent of acceptances for each reference value. Because no actual measurements from the gage are available to estimate bias and repeatability, Minitab calculates bias and repeatability by fitting the normal distribution curve using the calculated probabilities of acceptance and the known reference values for all parts.

If the measurement errors follow a normal distribution, the calculated probabilities fall along a straight line. A regression line is fit to the probabilities.
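A minimal sketch of the fitting idea follows. This is an assumed simplification of the analytic method, with invented data: each acceptance probability is converted to a normal z-score, a straight line is fit against the reference values, and the line's coefficients yield estimates of the gage's mean and spread under the model z = (reference − μ) / σ.

```python
# Sketch of fitting a normal curve to acceptance probabilities
# (assumed simplification; data are hypothetical).
from statistics import NormalDist

reference = [-0.050, -0.025, 0.000, 0.025, 0.050, 0.075]
p_accept  = [0.05, 0.20, 0.50, 0.80, 0.95, 0.99]   # acceptance proportions per part

# Convert probabilities to normal z-scores (probit transform).
z = [NormalDist().inv_cdf(p) for p in p_accept]

# Ordinary least squares fit: z = intercept + slope * reference
n = len(reference)
mx = sum(reference) / n
mz = sum(z) / n
slope = (sum((x - mx) * (y - mz) for x, y in zip(reference, z))
         / sum((x - mx) ** 2 for x in reference))
intercept = mz - slope * mx

# Under the model z = (reference - mu) / sigma:
mu = -intercept / slope   # estimated center of the gage's response
sigma = 1 / slope         # estimated spread (related to repeatability)
print(round(mu, 5), round(sigma, 5))
```

If the probabilities fall close to the line, the normal model is reasonable; systematic curvature suggests the measurement errors are not normal.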

The R-sq (R²) value for the fitted regression line indicates the percentage of the variation in the probability of acceptance responses that the regression model explains. R² ranges from 0% to 100%. In general, the higher the R² value, the better the model fits your data. R² values greater than 90% usually indicate a very good fit.
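For reference, R² can be computed from the observed values and the values predicted by the fitted line, as in this sketch with hypothetical numbers:

```python
# Illustrative R-sq calculation for a fitted regression line (hypothetical data).
observed  = [1.0, 2.1, 2.9, 4.2, 5.1]
predicted = [1.1, 2.0, 3.0, 4.0, 5.0]   # values from the fitted line

mean_obs = sum(observed) / len(observed)
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))   # residual sum of squares
ss_tot = sum((o - mean_obs) ** 2 for o in observed)               # total sum of squares

r_sq = 1 - ss_res / ss_tot
print(f"R-sq = {100 * r_sq:.1f}%")
```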