The sample size (N) is the total number of observations in each group.
The sample size affects the confidence interval and the power of the test.
Usually, a larger sample yields a narrower confidence interval. A larger sample size also gives the test more power to detect a difference.
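As an illustration of this relationship (outside Minitab), the following Python sketch computes the half-width of a confidence interval for a single mean, t × s / √n, for increasing sample sizes. The standard deviation and sample sizes are made-up values chosen only to show the trend.

```python
import math

from scipy import stats


def ci_margin(s, n, confidence=0.95):
    """Half-width of a confidence interval for one mean: t * s / sqrt(n)."""
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return t_crit * s / math.sqrt(n)


# Same sample standard deviation, larger and larger samples:
# the interval half-width shrinks as n grows.
for n in (10, 40, 160):
    print(n, round(ci_margin(s=8.0, n=n), 2))
```

Note that the half-width shrinks roughly in proportion to 1/√n, so quadrupling the sample size approximately halves the interval width.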
The mean of the observations within each group. The mean describes each group with a single value identifying the center of the data. It is the sum of all the observations within a group divided by the number of observations in that group.
The mean of each sample provides an estimate of each population mean. The differences between sample means are the estimates of the difference between the population means.
Because the difference between the group means is based on data from a sample and not the entire population, you cannot be certain that it equals the population difference. To get a better sense of the population difference, you can use the confidence interval.
Use the Grouping Information table to quickly determine whether the mean difference between any pair of groups is statistically significant.
The Grouping column contains letters that group the factor levels. Groups that do not share a letter have a mean difference that is statistically significant.
If the table identifies differences that are statistically significant, use the confidence intervals of the differences to determine whether the differences are practically significant.
You can do a multiple comparison analysis for random terms to determine which levels of the term are significantly different from the other levels. For example, if you study the effectiveness of a drug on a particular illness, subject is usually a significant random factor. You can use multiple comparisons to determine whether the drug affected specific subjects in the study differently (perhaps it made one of them more ill).
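To illustrate what a multiple comparison analysis produces (outside Minitab), the sketch below runs Tukey's HSD pairwise comparison on three groups using SciPy. The measurements are made-up data, and SciPy's output is a matrix of adjusted p-values rather than Minitab's grouping-letter table, but the interpretation is the same: small adjusted p-values mark pairs of groups whose means differ significantly.

```python
from scipy import stats

# Made-up response measurements for three groups
a = [24.5, 23.5, 26.4, 27.1, 29.9]
b = [28.4, 34.2, 29.5, 32.2, 30.1]
c = [26.1, 28.3, 24.3, 26.2, 27.8]

res = stats.tukey_hsd(a, b, c)

# res.pvalue[i][j] is the adjusted p-value for comparing group i with group j;
# pairs with a small adjusted p-value would not share a grouping letter.
print(res.pvalue)
```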
Use the individual confidence intervals to identify statistically significant differences between the group means, to determine likely ranges for the differences, and to determine whether the differences are practically significant. Fisher's individual tests table displays a set of confidence intervals for the difference between pairs of means.
The individual confidence level is the percentage of times that a single confidence interval includes the true difference between one pair of group means, if you repeat the study. Individual confidence intervals are available only for Fisher's method. All of the other comparison methods produce simultaneous confidence intervals.
Controlling the individual confidence level is uncommon because it does not control the simultaneous confidence level, which often increases to unacceptable levels. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true difference increases with the number of comparisons.
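The way the error rate grows with the number of comparisons can be sketched directly. Assuming independent comparisons, the chance that at least one of k intervals at individual level 1 − α misses its true difference is 1 − (1 − α)^k:

```python
def family_error_rate(alpha, k):
    """Chance that at least one of k independent comparisons at level alpha
    produces a false positive: 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k


# With alpha = 0.05, the family error rate climbs quickly as
# the number of comparisons grows.
for k in (1, 3, 6, 10):
    print(k, round(family_error_rate(0.05, k), 3))
```

With 10 comparisons at the 0.05 level, the chance of at least one false positive already exceeds 40%, which is why simultaneous methods adjust each interval.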
The confidence interval of the difference is composed of two parts: the point estimate of the difference and the margin of error around that estimate.
Use the confidence intervals to assess the differences between group means.
This value is the difference between the sample means of two groups.
The differences between the sample means of the groups are estimates of the differences between the population means of these groups.
Because each mean difference is based on data from a sample and not from the entire population, you cannot be certain that it equals the population difference. To better understand the differences between population means, use the confidence intervals.
The standard error of the difference between means (SE of Difference) estimates the variability of the difference between sample means that you would obtain if you took repeated samples from the same populations.
Use the standard error of the difference between means to determine how precisely the differences between the sample means estimate the differences between the population means. A lower standard error value indicates a more precise estimate.
Minitab uses the standard error of the difference to calculate the confidence intervals of the differences between means, which is a range of values that is likely to include the population differences.
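As a generic sketch of this calculation (not Minitab's exact formula, which uses a pooled variance from the model), the following Python function builds a Welch-style confidence interval for a difference between two means from the standard error of the difference. All inputs in the example are made-up summary statistics.

```python
import math

from scipy import stats


def diff_ci(mean1, s1, n1, mean2, s2, n2, confidence=0.95):
    """Welch-style CI for the difference between two means.

    A generic two-sample sketch; Minitab's comparisons for ANOVA models
    pool the variance across groups instead.
    """
    diff = mean1 - mean2
    # Standard error of the difference between means
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (s1**2 / n1 + s2**2 / n2) ** 2 / (
        (s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1)
    )
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)
    return diff - t_crit * se, diff + t_crit * se


# Made-up summary statistics for two groups
print(diff_ci(mean1=20.0, s1=4.0, n1=15, mean2=16.0, s2=5.0, n2=12))
```

A smaller standard error produces a narrower interval around the same point estimate.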
The degrees of freedom (DF) represent the amount of information in your data. Minitab uses the degrees of freedom to calculate the t-test for the difference in means. Minitab only displays the degrees of freedom if you perform comparisons for a mixed effects model.
Use the simultaneous confidence intervals of the difference (95% CI) to identify mean differences that are statistically significant, to determine likely ranges for the differences, and to assess the practical significance of the differences. The table displays a set of confidence intervals for the difference between pairs of means. Confidence intervals that do not contain zero indicate a mean difference that is statistically significant.
The simultaneous confidence level is the percentage of times that a set of confidence intervals includes the true differences for all group comparisons if the study were repeated multiple times.
Controlling the simultaneous confidence level is particularly important when you perform multiple comparisons. If you do not control the simultaneous confidence level, the chance that at least one confidence interval does not contain the true difference increases with the number of comparisons.
For more information, go to Understanding individual and simultaneous confidence levels in multiple comparisons.
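The zero-containment check described above can be illustrated with SciPy's Tukey HSD simultaneous intervals (outside Minitab, with made-up data): a pairwise difference is statistically significant when its simultaneous interval excludes zero.

```python
from scipy import stats

# Made-up response measurements for three groups
a = [24.5, 23.5, 26.4, 27.1, 29.9]
b = [28.4, 34.2, 29.5, 32.2, 30.1]
c = [26.1, 28.3, 24.3, 26.2, 27.8]

res = stats.tukey_hsd(a, b, c)
ci = res.confidence_interval(confidence_level=0.95)

# ci.low[i, j] and ci.high[i, j] bound the simultaneous interval for the
# difference between group i and group j; an interval that excludes zero
# indicates a statistically significant difference.
for i in range(3):
    for j in range(i + 1, 3):
        lo, hi = ci.low[i, j], ci.high[i, j]
        verdict = "significant" if lo > 0 or hi < 0 else "not significant"
        print(i, j, round(lo, 2), round(hi, 2), verdict)
```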
The t-value is a test statistic that measures the ratio between the difference in means and the standard error of the difference.
You can use the t-value to determine whether to reject the null hypothesis, which states that the difference in means is 0. However, most people use the p-value because it is easier to interpret. For more information, go to Using the t-value to determine whether to reject the null hypothesis.
Minitab uses the t-value to calculate the p-value.
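This calculation can be sketched in Python (outside Minitab): the t-value is the difference divided by its standard error, and the two-sided p-value is the probability of a t-statistic at least that extreme under the null hypothesis. The difference, standard error, and degrees of freedom below are made-up values.

```python
from scipy import stats


def t_and_p(diff, se, df):
    """t-statistic for a difference of means and its two-sided p-value."""
    t = diff / se
    # Two-sided p-value: probability of |T| >= |t| under the t distribution
    p = 2 * stats.t.sf(abs(t), df)
    return t, p


# Made-up values: difference of 3.5 with standard error 1.4 and 18 DF
t, p = t_and_p(diff=3.5, se=1.4, df=18)
print(round(t, 2), round(p, 4))
```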
The adjusted p-value indicates which pairs within a family of comparisons are significantly different. The adjustment limits the family error rate to the alpha level that you specify. If you use a regular p-value for multiple comparisons, the family error rate increases with each additional comparison.
It is important to consider the family error rate when making multiple comparisons, because your chances of committing a type I error for a series of comparisons are greater than the error rate for any one comparison alone.
If the adjusted p-value is less than alpha, reject the null hypothesis and conclude that the difference between a pair of group means is statistically significant. The adjusted p-value also represents the smallest family error rate at which a particular null hypothesis is rejected.
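To show what an adjustment does in the simplest case, the sketch below applies the Bonferroni correction, which multiplies each raw p-value by the number of comparisons and caps the result at 1. This is only an illustration: the adjustment Minitab actually applies depends on the comparison method you choose (for example, Tukey or Fisher), and the raw p-values below are made up.

```python
def bonferroni_adjust(pvalues):
    """Bonferroni adjustment: multiply each p-value by the number of
    comparisons, capped at 1, to limit the family error rate."""
    k = len(pvalues)
    return [min(1.0, p * k) for p in pvalues]


# Made-up raw p-values for three pairwise comparisons
raw = [0.004, 0.020, 0.300]
print(bonferroni_adjust(raw))
```

After adjustment, a comparison is declared significant only if its adjusted p-value is still below the alpha level, which keeps the family error rate at or below alpha.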
Use the confidence intervals to determine likely ranges for the differences and to assess the practical significance of the differences. The graph displays a set of confidence intervals for the difference between pairs of means. Confidence intervals that do not contain zero indicate a mean difference that is statistically significant.
Depending on the comparison method you chose, the plot compares different pairs of groups and displays one of the following types of confidence intervals:
Individual confidence level
The percentage of times that a single confidence interval would include the true difference between one pair of group means if the study were repeated multiple times.
Simultaneous confidence level
The percentage of times that a set of confidence intervals would include the true differences for all group comparisons if the study were repeated multiple times.