Use the Bonferroni confidence intervals to estimate the standard deviation of each population based on your categorical factors. Each confidence interval is a range of likely values for the standard deviation of the corresponding population. The Bonferroni confidence intervals are adjusted to maintain the simultaneous confidence level.
Controlling the simultaneous confidence level is particularly important when you assess multiple confidence intervals. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true standard deviation increases with the number of confidence intervals.
For more information, go to Understanding individual and simultaneous confidence levels in multiple comparisons and What is the Bonferroni method?.
The Bonferroni confidence intervals cannot be used to determine differences between the groups. To determine differences between groups, use the p-values and the multiple comparison intervals on the summary plot.
In these results, the Bonferroni confidence intervals indicate that you can be 95% confident that the entire set of confidence intervals includes the true population standard deviations for all groups. Also, the individual confidence level indicates how confident you can be that an individual confidence interval contains the population standard deviation of that specific group. For example, you can be 99.1667% confident that the standard deviation for the population of advanced drivers that drive on dirt roads is within the confidence interval (0.453, 168.555).
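For data that satisfy the normality option, intervals of this kind can be sketched with the chi-square distribution by splitting the error rate evenly across the set of intervals. The function below is a minimal sketch of that Bonferroni adjustment, not Minitab's exact computation, and the sample values are hypothetical:

```python
import math
from scipy.stats import chi2

def bonferroni_sigma_ci(s, n, k, sim_conf=0.95):
    """Chi-square confidence interval for one population standard deviation,
    with the error rate split across k intervals (Bonferroni adjustment).
    Assumes normally distributed data; a sketch, not Minitab's exact method."""
    alpha = (1 - sim_conf) / k          # per-interval error rate
    df = n - 1
    lower = math.sqrt(df * s**2 / chi2.ppf(1 - alpha / 2, df))
    upper = math.sqrt(df * s**2 / chi2.ppf(alpha / 2, df))
    return lower, upper

# Hypothetical example: sample standard deviation 2.0 from n = 10
# observations, with six intervals in the set
lo, hi = bonferroni_sigma_ci(2.0, 10, 6)
```

Because the per-interval error rate shrinks as the number of groups grows, each individual interval becomes wider than an unadjusted interval would be.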
 
 

A boxplot provides a graphical summary of the distribution of each sample. The boxplot makes it easy to compare the shape, the central tendency, and the variability of the samples.
Use a boxplot to examine the spread of the data and to identify any potential outliers. Boxplots are best when the sample size is greater than 20.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Skewed data indicate that the data might not be normally distributed. Often, skewness is easiest to detect with an individual value plot, a histogram, or a boxplot.
Data that are severely skewed can affect the validity of the p-value if your sample is small (< 20 values). If your data are severely skewed and you have a small sample, consider increasing your sample size.
Outliers, which are data values that are far away from other data values, can strongly affect your results. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (special causes). Then, repeat the analysis.
The individual confidence level is the percentage of times that a single confidence interval includes the true standard deviation for one group if you repeat the study multiple times.
As you increase the number of confidence intervals in a set, the chance that at least one confidence interval does not contain the true standard deviation increases. The simultaneous confidence level indicates how confident you can be that the entire set of confidence intervals includes the true population standard deviations for all groups.
You can be 99.1667% confident that each individual confidence interval contains the population standard deviation for that specific group. For example, you can be 99.1667% confident that the standard deviation for the population of advanced drivers that drive on dirt roads is within the confidence interval (0.453, 168.555). However, because there are six confidence intervals in the set, you can be only 95% confident that all of the intervals contain the true values.
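The arithmetic behind these two levels is simple: the individual confidence level divides the simultaneous error rate evenly among the intervals in the set. For the six intervals in this example:

```python
k = 6                 # number of confidence intervals in the set
simultaneous = 0.95   # simultaneous confidence level

# Each interval receives an equal share of the 5% simultaneous error rate
individual = 1 - (1 - simultaneous) / k
print(round(individual * 100, 4))   # 99.1667
```

This is why the individual level (99.1667%) is higher than the simultaneous level (95%): each single interval must be more conservative so that the whole set holds jointly.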
 
 

An individual value plot displays the individual values in each sample. The individual value plot makes it easy to compare the samples. Each circle represents one observation. An individual value plot is especially useful when your sample size is small.
Use an individual value plot to examine the spread of the data and to identify any potential outliers. Individual value plots are best when the sample size is less than 50.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Skewed data indicate that the data might not be normally distributed. Often, skewness is easiest to detect with an individual value plot, a histogram, or a boxplot.
Outliers, which are data values that are far away from other data values, can strongly affect your results. Often, outliers are easy to identify on an individual value plot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (special causes). Then, repeat the analysis.
The sample size (N) is the total number of observations in each group.
The sample size affects the confidence interval and the power of the test.
Usually, a larger sample yields a narrower confidence interval. A larger sample size also gives the test more power to detect a difference. For more information, go to What is power?.
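As a quick illustration of the first point, the width of a chi-square confidence interval for a standard deviation shrinks as the sample size grows. This sketch assumes normally distributed data, and the sample standard deviation of 2.0 is hypothetical:

```python
from math import sqrt
from scipy.stats import chi2

def sigma_ci_width(s, n, conf=0.95):
    # Chi-square confidence interval for sigma, assuming normal data
    a, df = 1 - conf, n - 1
    lower = sqrt(df * s**2 / chi2.ppf(1 - a / 2, df))
    upper = sqrt(df * s**2 / chi2.ppf(a / 2, df))
    return upper - lower

# Same sample standard deviation, increasing sample sizes:
# the interval narrows as n grows
widths = [sigma_ci_width(2.0, n) for n in (10, 40, 160)]
```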
The test for equal variances is a hypothesis test that evaluates two mutually exclusive statements about two or more population standard deviations. These two statements are called the null hypothesis and the alternative hypothesis. A hypothesis test uses sample data to determine whether to reject the null hypothesis.
Compare the p-value to the significance level to determine whether to reject the null hypothesis.
 
 

The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
Use the p-value to determine whether any of the differences between the standard deviations are statistically significant. Minitab displays the results of either one or two tests that assess the equality of variances. If you have two p-values and they disagree, see the section on "Tests".
To determine whether any of the differences between the standard deviations are statistically significant, compare the p-value to your significance level to assess the null hypothesis. The null hypothesis states that the group standard deviations are all equal. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference.
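The decision rule described above can be written out directly; the p-value shown is hypothetical:

```python
alpha = 0.05      # significance level
p_value = 0.012   # hypothetical p-value from the equal-variances test

# Reject the null hypothesis only when the p-value falls below alpha
if p_value < alpha:
    decision = "Reject H0: at least one standard deviation differs"
else:
    decision = "Fail to reject H0: no significant difference detected"
```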
 
 

The standard deviation is the most common measure of dispersion, or how spread out the data are around the mean. The symbol σ (sigma) is often used to represent the standard deviation of a population. The symbol s is used to represent the standard deviation of a sample.
The standard deviation uses the same units as the variable. A higher standard deviation value indicates greater spread in the data. A guideline for data that follow the normal distribution is that approximately 68% of the values fall within one standard deviation of the mean, 95% of the values fall within two standard deviations, and 99.7% of the values fall within three standard deviations.
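The 68-95-99.7 guideline can be checked against the standard normal cumulative distribution function:

```python
from scipy.stats import norm

# Proportion of a normal population within k standard deviations of the mean
coverage = {k: norm.cdf(k) - norm.cdf(-k) for k in (1, 2, 3)}
# coverage[1] ~ 0.6827, coverage[2] ~ 0.9545, coverage[3] ~ 0.9973
```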
The summary plot shows intervals for the equal variances tests. The type of intervals that Minitab displays depends on whether you selected Use test and confidence intervals based on normal distribution on the Data tab and the number of groups in your data.
If you did not select Use test and confidence intervals based on normal distribution, the summary plot displays comparison intervals based on the multiple comparisons method.
If you selected Use test and confidence intervals based on normal distribution and have two groups, Minitab performs the F-test. If you have 3 or more groups, Minitab performs Bartlett's test. For either of these tests, the plot also displays Bonferroni confidence intervals.
If you did not select Use test and confidence intervals based on normal distribution, the summary plot displays multiple comparison intervals.
If it is valid for you to use the multiple comparison p-value, you can use the multiple comparison confidence intervals to identify specific pairs of groups that have a difference that is statistically significant. If two intervals do not overlap, the difference between the corresponding standard deviations is statistically significant.
If the properties of your data require that you use Levene's method, do not assess the confidence intervals on the summary plot.
For information about which test to use, go to the section "Tests".
If you selected Use test and confidence intervals based on normal distribution, the summary plot displays Bonferroni confidence intervals.
Use the Bonferroni confidence intervals to estimate the standard deviation of each population based on your categorical factor(s). Each confidence interval is a range of likely values for the standard deviation of the corresponding population. The Bonferroni confidence intervals are adjusted to maintain the simultaneous confidence level.
Controlling the simultaneous confidence level is particularly important when you assess multiple confidence intervals. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true standard deviation increases with the number of confidence intervals.
For more information, go to Understanding individual and simultaneous confidence levels in multiple comparisons and What is the Bonferroni method?.
The Bonferroni confidence intervals cannot be used to determine differences between the groups. Use the p-value in the output to determine whether any of the differences between the standard deviations are statistically significant.
The tests for equal variances that Minitab displays depend on whether you selected Use test based on normal distribution on the Options tab and on the number of groups in your data.
If you did not select Use test based on normal distribution, Minitab displays test results for both the multiple comparisons method and Levene's method. For most continuous distributions, both methods give you a type I error rate that is close to your significance level (denoted as α or alpha). The multiple comparisons method is usually more powerful. If the p-value for the multiple comparisons method is significant, you can use the summary plot to identify specific populations that have standard deviations that are different from each other.
If the p-value for the multiple comparisons test is less than your chosen significance level, the differences between some of the standard deviations are statistically significant. Use the multiple comparison intervals to determine which standard deviations are significantly different from each other. If two intervals do not overlap, then the corresponding standard deviations (and variances) are significantly different.
When you have small samples from very skewed or heavy-tailed distributions, the type I error rate for the multiple comparisons method can be higher than α. Under these conditions, if Levene's method gives you a smaller p-value than the multiple comparisons method, base your conclusions on Levene's method.
If you select Use test based on normal distribution and you have two groups, Minitab performs the F-test. If you have 3 or more groups, Minitab performs Bartlett's test.
The F-test and Bartlett's test are accurate only for normally distributed data. Any departure from normality can cause these tests to yield inaccurate results. However, if the data conform to the normal distribution, the F-test and Bartlett's test are typically more powerful than either the multiple comparisons method or Levene's method.
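Both Bartlett's test and Levene's test are available in SciPy, which can be handy for checking results outside Minitab. The three group samples below are hypothetical:

```python
from scipy.stats import bartlett, levene

# Hypothetical samples from three groups with visibly different spreads
low_spread  = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8]
mid_spread  = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5]
high_spread = [3.9, 7.1, 2.2, 8.0, 5.0, 6.6]

stat_b, p_b = bartlett(low_spread, mid_spread, high_spread)  # assumes normality
stat_l, p_l = levene(low_spread, mid_spread, high_spread)    # robust to non-normality
```

Comparing the two p-values on the same data illustrates the trade-off described above: Bartlett's test gains power from the normality assumption, while Levene's method sacrifices some power for robustness.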
If the p-value for the test is less than your significance level, the differences between some of the standard deviations are statistically significant.
The multiple comparisons test does not use a test statistic.
Minitab uses the test statistic to calculate the p-value, which you use to make a decision about the statistical significance of the differences between standard deviations. The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
A sufficiently high test statistic indicates that the difference between some of the standard deviations is statistically significant.
You can use the test statistic to determine whether to reject the null hypothesis. However, the p-value is used more often because it is easier to interpret.
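For example, Bartlett's test statistic is compared to a chi-square distribution with k − 1 degrees of freedom under the null hypothesis, so the p-value is the upper tail area beyond the observed statistic. The group count and statistic below are hypothetical:

```python
from scipy.stats import chi2

# Under H0, Bartlett's statistic is approximately chi-square
# distributed with k - 1 degrees of freedom
k = 3              # hypothetical number of groups
test_stat = 9.2    # hypothetical test statistic

p_value = chi2.sf(test_stat, df=k - 1)   # upper tail area, ~ 0.010
```

A large statistic puts the result far in the upper tail, which yields a small p-value; here 0.010 < 0.05, so the differences would be judged statistically significant.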