Error usually refers to the degree to which functions, formulas, and statistics fail to fully explain or model a true or theoretical value. In other words, it is the difference between an actual value and a predicted value. While some degree of error or uncertainty exists in most statistical analyses, identifying and quantifying it at least helps us account for its presence. Consider a contractor hired to replace the roof on a house. The contractor can calculate an estimated price for the job based on a number of variables, such as the dimensions of the roof, its pitch, and even the type of roof. However, variability in these and other factors can result in a different final cost. Both the contractor and the homeowner will be interested not only in the estimated cost, but also in the error associated with the formula used to calculate it.
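As a minimal sketch (the dollar figures here are hypothetical, chosen only for illustration), the error in the contractor's estimate is simply the actual cost minus the predicted cost:

# Hypothetical roofing job: the contractor's estimate vs. the final cost
estimated_cost = 12500.00   # predicted value from the contractor's formula
actual_cost = 13240.00      # actual value invoiced after the job
error = actual_cost - estimated_cost
print(f"Error: ${error:,.2f}")                        # $740.00
print(f"Relative error: {error / actual_cost:.1%}")   # about 5.6%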

Some examples of specific types of error in Minitab include the following (a brief sketch after these definitions illustrates residual error, SE of fits, and Type I error):
Residual error
The variability that remains after all the main effects and interactions have been accounted for.
Standard error of the fits (SE of fits)
The variation in the estimated mean response for a specified set of predictor values, factor levels, or components.
Family error rate
The maximum probability of obtaining one or more confidence intervals that do not contain the true difference between level means.
Type I and Type II error
Type I error is the probability of rejecting the null hypothesis when it is true; Type II error is the probability of failing to reject the null hypothesis when it is false.
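To make these definitions concrete, here is a minimal Python sketch (not Minitab output; the data are simulated and all numbers are illustrative). It fits a simple regression to extract the residual errors and the standard errors of the fits, then estimates the Type I error rate of a t-test by simulation:

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)

# --- Residual error and SE of fits in a simple regression ---
# Hypothetical data: roof size vs. job cost, with random noise.
x = rng.uniform(10, 40, size=30)
y = 2000.0 + 250.0 * x + rng.normal(0.0, 400.0, size=30)

X = sm.add_constant(x)                # add an intercept column
model = sm.OLS(y, X).fit()

residuals = model.resid               # residual error: actual minus fitted value
print("Residual mean square:", model.mse_resid)

pred = model.get_prediction(X)
se_fits = pred.se_mean                # SE of fits at each set of predictor values
print("First few SE of fits:", se_fits[:3])

# --- Type I error rate by simulation ---
# The null hypothesis (mean = 0) is true for every simulated sample,
# so the long-run rejection rate should be close to alpha.
alpha = 0.05
n_sims = 10_000
rejections = sum(
    stats.ttest_1samp(rng.normal(0.0, 1.0, size=20), 0.0).pvalue < alpha
    for _ in range(n_sims)
)
print("Observed Type I error rate:", rejections / n_sims)  # approximately 0.05

In Minitab's regression output, the analogous quantities appear as the residuals and the SE Fit column.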