In the output, the null and alternative hypotheses help you to verify that you entered the correct value for the hypothesized ratio.
The significance level (denoted as α or alpha) is the maximum acceptable level of risk for rejecting the null hypothesis when the null hypothesis is true (type I error). Usually, you choose the significance level before you analyze the data. In Minitab, you can choose the significance level by specifying the confidence level, because the significance level equals 1 minus the confidence level. Because the default confidence level in Minitab is 0.95, the default significance level is 0.05.
Compare the significance level to the p-value to decide whether to reject or fail to reject the null hypothesis (H_{0}). If the p-value is less than the significance level, the usual interpretation is that the results are statistically significant, and you reject H_{0}.
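The decision rule above can be sketched in a few lines of Python. The p-value below is hypothetical, purely for illustration; in practice you would take it from the Minitab output:

```python
# Decision rule sketch. The default significance level is
# 1 minus the default confidence level of 0.95.
confidence_level = 0.95
alpha = 1 - confidence_level      # 0.05

p_value = 0.032                   # hypothetical p-value, for illustration only
reject_h0 = p_value < alpha

print("reject H0" if reject_h0 else "fail to reject H0")
```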
The sample size (N) is the total number of observations in the sample.
The sample size affects the confidence interval and the power of the test.
Usually, a larger sample size results in a narrower confidence interval. A larger sample size also gives the test more power to detect a difference. For more information, go to What is power?.
The standard deviation is the most common measure of dispersion, or how spread out the data are about the mean. The symbol σ (sigma) is often used to represent the standard deviation of a population, while s is used to represent the standard deviation of a sample. Variation that is random or natural to a process is often referred to as noise.
The standard deviation uses the same units as the data.
The standard deviation of each sample is an estimate of the corresponding population standard deviation. Minitab uses the sample standard deviations to estimate the ratio of the population standard deviations. Focus your interpretation on this ratio.
The variance measures how spread out the data are about their mean. The variance is equal to the standard deviation squared.
The variance of each sample is an estimate of the corresponding population variance. Minitab uses the sample variances to estimate the ratio of the population variances. Focus your interpretation on this ratio.
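As a minimal sketch using Python's standard library and made-up sample values, the relationship between the two measures of spread is:

```python
import statistics

# A small hypothetical sample (illustrative values only).
sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5]

s = statistics.stdev(sample)       # sample standard deviation (n - 1 divisor)
var = statistics.variance(sample)  # sample variance

# The variance is the standard deviation squared, and it is
# expressed in squared units of the data.
print(f"s = {s:.4f}, s^2 = {var:.4f}")
```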
The ratio of standard deviations is the standard deviation of the first sample divided by the standard deviation of the second sample.
The estimated ratio of standard deviations from your sample data is an estimate of the ratio of the population standard deviations.
Because the estimated ratio is based on sample data and not on the entire population, it is unlikely that the sample ratio equals the population ratio. To better estimate the ratio, use the confidence interval.
The ratio of variances is the variance of the first sample divided by the variance of the second sample.
The estimated ratio of variances from your sample data is an estimate of the ratio of the population variances.
Because the estimated ratio is based on sample data and not on the entire population, it is unlikely that the sample ratio equals the population ratio. To better estimate the ratio, use the confidence interval.
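Both estimated ratios can be computed directly from the sample statistics. A short Python sketch with made-up sample values (illustrative only) shows the calculation, and that the ratio of variances is simply the square of the ratio of standard deviations:

```python
import statistics

# Hypothetical samples (illustrative values only).
sample1 = [10.2, 9.8, 11.5, 10.9, 9.4, 10.7]
sample2 = [12.1, 8.3, 13.6, 7.9, 11.2, 9.0]

s1 = statistics.stdev(sample1)
s2 = statistics.stdev(sample2)

sd_ratio = s1 / s2              # ratio of standard deviations (first / second)
var_ratio = s1**2 / s2**2       # ratio of variances (first / second)

# The ratio of variances equals the square of the ratio of
# standard deviations.
print(f"SD ratio = {sd_ratio:.4f}, variance ratio = {var_ratio:.4f}")
```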
The confidence interval provides a range of likely values for the population ratio. Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. But, if you repeated your sample many times, a certain percentage of the resulting confidence intervals or bounds would contain the unknown population ratio. The percentage of these confidence intervals or bounds that contain the ratio is the confidence level of the interval. For example, a 95% confidence level indicates that if you take 100 random samples from the population, you could expect approximately 95 of the samples to produce intervals that contain the population ratio.
An upper bound defines a value that the population ratio is likely to be less than. A lower bound defines a value that the population ratio is likely to be greater than.
The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size. For more information, go to Ways to get a more precise confidence interval.
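A small simulation (standard library only; parameters chosen for illustration) shows why an interval estimate is reported alongside the point estimate. Even when both populations have exactly the same standard deviation, so the true ratio is 1, the sample ratio varies noticeably from sample to sample:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Both populations are normal with standard deviation 2, so the
# true ratio of standard deviations is exactly 1.0.
ratios = []
for _ in range(1000):
    g1 = [random.gauss(0, 2) for _ in range(20)]
    g2 = [random.gauss(0, 2) for _ in range(20)]
    ratios.append(statistics.stdev(g1) / statistics.stdev(g2))

# The sample ratios scatter around 1.0 rather than equaling it.
print(f"min = {min(ratios):.3f}, max = {max(ratios):.3f}")
```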
By default, the 2 variances test displays the results for Levene's method and Bonett's method. Bonett's method is usually more reliable than Levene's method. However, for extremely skewed and heavy-tailed distributions, Levene's method is usually more reliable than Bonett's method. Use the F-test only if you are certain that the data follow a normal distribution; even a small deviation from normality can greatly affect the F-test results. For more information, go to Should I use Bonett's method or Levene's method for 2 Variances?.
| Estimated Ratio | 95% CI for Ratio using Bonett | 95% CI for Ratio using Levene |
| --- | --- | --- |
| 0.658241 | (0.372, 1.215) | (0.378, 1.296) |
In these results, the estimate for the population ratio of standard deviations for ratings from two hospitals is 0.658. Using Bonett's method, you can be 95% confident that the population ratio of the standard deviations for the hospital ratings is between 0.372 and 1.215.
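One way to read this interval is to check whether it contains the hypothesized ratio of 1 (equal standard deviations). Because Bonett's test statistic is obtained by inverting the confidence interval, an interval that contains 1 corresponds to failing to reject equality at the 0.05 level. A trivial sketch using the Bonett bounds from the results above:

```python
# Bonett 95% interval from the results above.
lower, upper = 0.372, 1.215

hypothesized_ratio = 1.0
inside = lower < hypothesized_ratio < upper

# Because 1 falls inside the interval, these data do not rule out
# equal population standard deviations at the 0.05 level.
print(inside)
```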
The degrees of freedom (DF) are the amount of information your data provide that you can "spend" to estimate the values of unknown population parameters and to calculate the variability of these estimates. For a 2 variances test, the degrees of freedom are determined by the number of observations in your sample and also depend on the method that Minitab uses.
Minitab uses the degrees of freedom to determine the test statistic. The degrees of freedom are determined by the sample size. Increasing your sample size provides more information about the population, which increases the degrees of freedom.
The test statistic is a statistic that Minitab calculates for Bonett's method by inverting the confidence interval. The test statistic for Bonett's method is not available for summarized data or for data that are not balanced.
You can compare the test statistic to critical values of the chi-square distribution to determine whether to reject the null hypothesis. However, using the p-value of the test to make the same determination is usually more practical and convenient. The p-value has the same interpretation regardless of sample size, whereas the same chi-square statistic can indicate opposite conclusions at different sample sizes.
The test statistic is used to calculate the p-value.
The test statistic for Levene's method uses the one-way ANOVA F-statistic applied to the absolute deviations of the observations from their sample medians. Therefore, applying Levene's method is equivalent to applying the one-way ANOVA procedure to these absolute deviations. For 2-sample problems, this method is also equivalent to applying the 2-sample t procedure to the absolute deviations.
You can compare the test statistic to critical values of the F-distribution to determine whether to reject the null hypothesis. However, using the p-value of the test to make the same determination is usually more practical and convenient.
The test statistic is used to calculate the p-value.
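For two samples, this equivalence can be checked directly with the Python standard library (the sample values below are made up for illustration): the one-way ANOVA F statistic computed on the absolute deviations from each sample's median equals the square of the pooled two-sample t statistic on those same deviations.

```python
import math
import statistics

def anova_f(g1, g2):
    """One-way ANOVA F statistic for exactly two groups."""
    n1, n2 = len(g1), len(g2)
    grand = statistics.fmean(g1 + g2)
    m1, m2 = statistics.fmean(g1), statistics.fmean(g2)
    ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
    ss_within = (sum((x - m1) ** 2 for x in g1)
                 + sum((x - m2) ** 2 for x in g2))
    return ss_between / (ss_within / (n1 + n2 - 2))

def pooled_t(g1, g2):
    """Two-sample pooled t statistic."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = statistics.fmean(g1), statistics.fmean(g2)
    sp2 = (((n1 - 1) * statistics.variance(g1)
            + (n2 - 1) * statistics.variance(g2)) / (n1 + n2 - 2))
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical samples (illustrative values only).
sample1 = [10.2, 9.8, 11.5, 10.9, 9.4, 10.7]
sample2 = [12.1, 8.3, 13.6, 7.9, 11.2, 9.0]

# Levene's statistic: ANOVA F on absolute deviations from each
# sample's median.
z1 = [abs(x - statistics.median(sample1)) for x in sample1]
z2 = [abs(x - statistics.median(sample2)) for x in sample2]
levene_stat = anova_f(z1, z2)

# For two samples, this equals the squared pooled t statistic on z.
print(f"F = {levene_stat:.4f}, t^2 = {pooled_t(z1, z2) ** 2:.4f}")
```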
The test statistic for the F-test is the ratio of the observed sample variances (the variance of the first sample divided by the variance of the second sample).
You can compare the test statistic to critical values of the F-distribution to determine whether to reject the null hypothesis. However, using the p-value of the test to make the same determination is usually more practical and convenient.
The test statistic is used to calculate the p-value.
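The F statistic itself is a one-line calculation. A sketch with made-up sample values (illustrative only):

```python
import statistics

# Hypothetical samples (illustrative values only).
sample1 = [10.2, 9.8, 11.5, 10.9, 9.4, 10.7]
sample2 = [12.1, 8.3, 13.6, 7.9, 11.2, 9.0]

# F statistic: ratio of the observed sample variances (first / second).
f_stat = statistics.variance(sample1) / statistics.variance(sample2)

# Degrees of freedom: n1 - 1 for the numerator, n2 - 1 for the denominator.
df1, df2 = len(sample1) - 1, len(sample2) - 1

print(f"F = {f_stat:.4f} with ({df1}, {df2}) degrees of freedom")
```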
The p-value is a probability that measures the evidence against the null hypothesis. A smaller p-value provides stronger evidence against the null hypothesis.
Use the p-value to determine whether the difference in population standard deviations or variances is statistically significant.
For more information, go to Should I use Bonett's method or Levene's method for 2 Variances?.
The summary plot shows confidence intervals for the ratio and confidence intervals for the standard deviations or variances of each sample. The summary plot also shows boxplots of the sample data and p-values for the hypothesis tests.
The confidence interval provides a range of likely values for the population ratio. Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. But, if you repeated your sample many times, a certain percentage of the resulting confidence intervals or bounds would contain the unknown population ratio. The percentage of these confidence intervals or bounds that contain the ratio is the confidence level of the interval. For example, a 95% confidence level indicates that if you take 100 random samples from the population, you could expect approximately 95 of the samples to produce intervals that contain the population ratio.
An upper bound defines a value that the population ratio is likely to be less than. A lower bound defines a value that the population ratio is likely to be greater than.
The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size. For more information, go to Ways to get a more precise confidence interval.
By default, the 2 variances test displays the results for Levene's method and Bonett's method. Bonett's method is usually more reliable than Levene's method. However, for extremely skewed and heavy-tailed distributions, Levene's method is usually more reliable than Bonett's method. Use the F-test only if you are certain that the data follow a normal distribution; even a small deviation from normality can greatly affect the F-test results. For more information, go to Should I use Bonett's method or Levene's method for 2 Variances?.
A boxplot provides a graphical summary of the distribution of each sample. The boxplot makes it easy to compare the shape, the central tendency, and the variability of the samples.
Use a boxplot to examine the spread of the data and to identify any potential outliers. Boxplots are best when the sample size is greater than 20.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Data that are severely skewed can affect the validity of the p-value if your sample is small (either sample has fewer than 20 values). If your data are severely skewed and you have a small sample, consider increasing your sample size.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
An individual value plot displays the individual values in each sample. An individual value plot makes it easy to compare the samples. Each circle represents one observation. An individual value plot is especially useful when you have relatively few observations and when you also need to assess the effect of each observation.
Use an individual value plot to examine the spread of the data and to identify any potential outliers. Individual value plots are best when the sample size is less than 50.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Data that are severely skewed can affect the validity of the p-value if your sample is small (either sample has fewer than 20 values). If your data are severely skewed and you have a small sample, consider increasing your sample size.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
A histogram divides sample values into many intervals and represents the frequency of data values in each interval with a bar.
Use a histogram to assess the shape and spread of the data. Histograms are best when the sample size is greater than 20.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Data that are severely skewed can affect the validity of the p-value if your sample is small (either sample has fewer than 20 values). If your data are severely skewed and you have a small sample, consider increasing your sample size.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.