The Anderson-Darling goodness-of-fit statistic (A-Squared) measures the area between the fitted line (based on the normal distribution) and the empirical distribution function (which is based on the data points). The Anderson-Darling statistic is a squared distance that is weighted more heavily in the tails of the distribution.
Minitab uses the Anderson-Darling statistic to calculate the p-value. A smaller value for the Anderson-Darling statistic indicates that the data follow the normal distribution more closely.
The p-value is a probability that measures the evidence against the null hypothesis. A smaller p-value provides stronger evidence against the null hypothesis.
Use the p-value to determine whether the evidence is strong enough to conclude that the data do not follow a normal distribution. Typically, if the p-value is less than or equal to the significance level (often 0.05), you conclude that the data do not follow a normal distribution.
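To make the definition concrete, here is a rough standard-library Python sketch of the A-squared computation, with the mean and standard deviation estimated from the sample. This is an illustration, not Minitab's implementation; small-sample corrections are omitted and the helper names are made up for the example.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling(data):
    """A-squared statistic for normality, fitting the normal
    distribution with the sample mean and standard deviation."""
    n = len(data)
    xs = sorted(data)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = [normal_cdf((x - mean) / sd) for x in xs]
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

a_squared = anderson_darling([13, 17, 18, 19, 12, 10, 7, 9, 14])
print(round(a_squared, 3))
```

A smaller result indicates a closer fit to the normal distribution; statistical software converts this statistic to a p-value using tabulated distributions of A-squared.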
The mean is the average of the data, which is the sum of all the observations divided by the number of observations.
Use the mean to describe the sample with a single value that represents the center of the data. Many statistical analyses use the mean as a standard measure of the center of the distribution of the data.
The standard deviation is the most common measure of dispersion, or how spread out the data are about the mean. The symbol σ (sigma) is often used to represent the standard deviation of a population, while s is used to represent the standard deviation of a sample. Variation that is random or natural to a process is often referred to as noise.
Because the standard deviation is in the same units as the data, it is usually easier to interpret than the variance.
Use the standard deviation to determine how spread out the data are from the mean. A higher standard deviation value indicates greater spread in the data. A good rule of thumb for a normal distribution is that approximately 68% of the values fall within one standard deviation of the mean, 95% of the values fall within two standard deviations, and 99.7% of the values fall within three standard deviations.
The variance measures how spread out the data are about their mean. The variance is equal to the standard deviation squared.
The greater the variance, the greater the spread in the data.
Because variance (σ²) is a squared quantity, its units are also squared, which may make the variance difficult to use in practice. The standard deviation is usually easier to interpret because it is in the same units as the data. For example, a sample of waiting times at a bus stop may have a mean of 15 minutes and a variance of 9 minutes². Because the variance is not in the same units as the data, the variance is often displayed with its square root, the standard deviation. A variance of 9 minutes² is equivalent to a standard deviation of 3 minutes.
Skewness is the extent to which the data are not symmetrical.
Kurtosis indicates how the tails of a distribution differ from the normal distribution.
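Both measures can be sketched from standardized sample moments. This is one common definition; Minitab and other packages apply bias corrections, so their reported values can differ slightly:

```python
def sample_moments(values):
    """Moment-based skewness and excess kurtosis (no bias correction).
    Skewness is 0 for symmetric data; excess kurtosis is 0 for a
    normal distribution."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    return skewness, excess_kurtosis

skew, kurt = sample_moments([13, 17, 18, 19, 12, 10, 7, 9, 14])
print(round(skew, 3), round(kurt, 3))
```

Positive skewness indicates a longer right tail; positive excess kurtosis indicates heavier tails than the normal distribution.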
N is the number of non-missing values in the sample, and N* is the number of missing values; the total count is the sum of the two.
| Total count | N | N* |
| --- | --- | --- |
| 149 | 141 | 8 |
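The relationship among the three counts can be illustrated with a quick Python sketch, using `None` to stand in for a missing worksheet cell (the data here are hypothetical):

```python
# Hypothetical column with two missing cells.
measurements = [13, None, 17, 18, None, 19]

total_count = len(measurements)               # all rows
n = sum(v is not None for v in measurements)  # non-missing values (N)
n_missing = measurements.count(None)          # missing values (N*)

print(total_count, n, n_missing)  # → 6 4 2
```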
The minimum is the smallest data value.
In these data, the minimum is 7.
13 | 17 | 18 | 19 | 12 | 10 | 7 | 9 | 14
Use the minimum to identify a possible outlier or a data-entry error. One of the simplest ways to assess the spread of your data is to compare the minimum and maximum. If the minimum value is very low, even when you consider the center, the spread, and the shape of the data, investigate the cause of the extreme value.
Quartiles are the three values that divide a sample of ordered data into four equal parts: the first quartile (Q1) at 25%, the second quartile (Q2, the median) at 50%, and the third quartile (Q3) at 75%.
The first quartile is the 25th percentile and indicates that 25% of the data are less than or equal to this value.
The median is the midpoint of the data set: half the observations are above this value and half are below it. The median is determined by ranking the observations and finding the observation at position (N + 1) / 2 in the ranked order. If the number of observations is even, the median is the average of the observations ranked at positions N / 2 and (N / 2) + 1.
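The ranking rule above translates directly to code; Python's built-in `statistics.median` implements the same logic:

```python
def median(values):
    xs = sorted(values)                # rank the observations
    n = len(xs)
    if n % 2 == 1:
        return xs[(n + 1) // 2 - 1]    # observation at position (N + 1) / 2
    # Even N: average the observations at positions N / 2 and (N / 2) + 1.
    return (xs[n // 2 - 1] + xs[n // 2]) / 2

print(median([13, 17, 18, 19, 12, 10, 7, 9, 14]))  # → 13
```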
The third quartile is the 75th percentile and indicates that 75% of the data are less than or equal to this value.
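For example, Python's `statistics.quantiles` computes all three quartiles at once. Interpolation methods differ between packages, so Minitab's reported quartiles may differ slightly for small samples:

```python
import statistics

data = [13, 17, 18, 19, 12, 10, 7, 9, 14]
# The default "exclusive" method interpolates at positions p * (N + 1).
q1, q2, q3 = statistics.quantiles(data, n=4)
print(q1, q2, q3)  # → 9.5 13.0 17.5
```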
The maximum is the largest data value.
In these data, the maximum is 19.
13 | 17 | 18 | 19 | 12 | 10 | 7 | 9 | 14
Use the maximum to identify a possible outlier or a data-entry error. One of the simplest ways to assess the spread of your data is to compare the minimum and maximum. If the maximum value is very high, even when you consider the center, the spread, and the shape of the data, investigate the cause of the extreme value.
The confidence interval provides a range of likely values for the population parameter. Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. But, if you repeated your sample many times, a certain percentage of the resulting confidence intervals or bounds would contain the unknown population parameter. The percentage of these confidence intervals or bounds that contain the parameter is the confidence level of the interval. For example, a 95% confidence level indicates that if you take 100 random samples from the population, you could expect approximately 95 of the samples to produce intervals that contain the population parameter.
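The repeated-sampling interpretation can be checked by simulation. This sketch uses a z-interval with a known standard deviation to stay standard-library-only; a real analysis of a sample with unknown σ would use a t-interval. The parameter values here are arbitrary.

```python
import math
import random

random.seed(0)  # reproducible sketch
true_mean, sigma, n = 15.0, 3.0, 50
trials = 1000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    sample_mean = sum(sample) / n
    half_width = 1.96 * sigma / math.sqrt(n)  # 95% z-interval, known sigma
    if sample_mean - half_width <= true_mean <= sample_mean + half_width:
        covered += 1

print(covered / trials)  # should land near 0.95
```

Roughly 95% of the simulated intervals cover the true mean, which is exactly what the confidence level promises.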
An upper bound defines a value that the population parameter is likely to be less than. A lower bound defines a value that the population parameter is likely to be greater than.
The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size. For more information, go to Ways to get a more precise confidence interval.
A histogram divides sample values into many intervals and represents the frequency of data values in each interval with a bar.
Use a histogram to assess the shape and spread of the data. Histograms are best when the sample size is greater than 20.
You can use a histogram of the data overlaid with a normal curve to examine the normality of your data. A normal distribution is symmetric and bell-shaped, as indicated by the curve. It is often difficult to evaluate normality with small samples. A probability plot is best for determining the distribution fit.
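The binning step described above can be sketched as follows; plotting tools handle it automatically, and the equal-width rule here is just one common choice (the sketch assumes at least two distinct values):

```python
def histogram_counts(values, bins=3):
    """Count values in equal-width intervals spanning the data range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        # The top edge of the range falls into the last bin.
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return counts

print(histogram_counts([13, 17, 18, 19, 12, 10, 7, 9, 14], bins=3))  # → [3, 3, 3]
```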
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
Multi-modal data have multiple peaks, also called modes. Multi-modal data often indicate that important variables are not yet accounted for.
If you have additional information that allows you to classify the observations into groups, you can create a group variable with this information. Then, you can create the graph with groups to determine whether the group variable accounts for the peaks in the data.
A boxplot provides a graphical summary of the distribution of a sample. The boxplot shows the shape, central tendency, and variability of the data.
Use a boxplot to examine the spread of the data and to identify any potential outliers. Boxplots are best when the sample size is greater than 20.
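The quantities behind a boxplot can be computed directly. The 1.5 × IQR fences used here are a common convention for flagging potential outliers, not the only one, and the exact quartile values depend on the interpolation method:

```python
import statistics

def boxplot_summary(values):
    """Quartiles plus any values beyond the 1.5 * IQR fences."""
    q1, q2, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in values if v < lo_fence or v > hi_fence]
    return q1, q2, q3, outliers

print(boxplot_summary([13, 17, 18, 19, 12, 10, 7, 9, 14]))
```

The box spans Q1 to Q3, the line inside marks the median, and points beyond the fences are drawn individually as potential outliers.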
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.