Multiple comparison intervals for k > 2 samples
Let Yi1, ..., Yini (i = 1, ..., k) be k > 2 independent samples, each sample being independent and identically distributed with mean μi and variance σi². Suppose also that the samples originate from populations that have a common kurtosis, γ.
Also, let γ̂ij be a pooled kurtosis estimator for the pair of samples (i, j), given as:

γ̂ij = (ni + nj) × [ Σl (Yil − mi)⁴ + Σl (Yjl − mj)⁴ ] / [ (ni − 1)Si² + (nj − 1)Sj² ]²
Let qα, k be the upper α point of the range of k variables that are independent and identically distributed according to the standard normal distribution. That is, qα, k satisfies the following:

P( max(Z1, ..., Zk) − min(Z1, ..., Zk) > qα, k ) = α
where Z1, ..., Zk are independent and identically distributed standard normal random variables. Barnard (1978) provides a simple numerical algorithm based on a 16-point Gaussian quadrature for calculating the distribution function of the normal range.
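Barnard's quadrature algorithm is not reproduced here, but the distribution function of the normal range can also be approximated by simulation. The Python sketch below (the function names and the Monte Carlo approach are illustrative assumptions, not Minitab's implementation) estimates P(range ≤ w) and inverts it by bisection to approximate qα, k:

```python
import random


def normal_range_cdf_mc(w, k, n_sims=40_000, seed=1):
    """Monte Carlo estimate of P(range of k iid N(0,1) variables <= w).

    A simulation stand-in for Barnard's (1978) quadrature; accuracy
    improves as n_sims grows. A fixed seed keeps the estimate
    deterministic across calls.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        zs = [rng.gauss(0.0, 1.0) for _ in range(k)]
        if max(zs) - min(zs) <= w:
            hits += 1
    return hits / n_sims


def upper_alpha_range_point(alpha, k, lo=0.0, hi=10.0, tol=1e-2):
    """Bisection for q with P(range > q) = alpha, using the MC cdf."""
    target = 1.0 - alpha
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if normal_range_cdf_mc(mid, k) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For k = 2 the range is √2 × |Z|, so q0.05, 2 should land near √2 × 1.96 ≈ 2.77, which gives a quick sanity check on the simulation.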
The multiple comparison procedure rejects the null hypothesis of equality of variances (also called homogeneity of variances) if, and only if, at least one pair of the following intervals do not overlap:
where ri = (ni - 3) / ni .
We refer to the above intervals as multiple comparison intervals or MC intervals. The MC intervals for each sample are not to be interpreted as confidence intervals for the standard deviations of the parent populations. Hochberg et al. (1982) refer to similar intervals for comparing means as "uncertainty intervals". The MC intervals given here are useful only for comparing the standard deviations or variances for multi-sample designs. When the overall multiple comparison test is significant, the standard deviations that correspond to the non-overlapping intervals are statistically different. (For the detailed derivation of these intervals, go to the white paper on Multiple comparison methods.)
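The rejection rule above — reject if at least one pair of intervals fails to overlap — can be mechanized directly. A short Python sketch (illustrative code, not Minitab's):

```python
def any_nonoverlapping(intervals):
    """Return True if at least one pair of intervals fails to overlap.

    intervals: list of (lower, upper) MC intervals, one per sample.
    Sorting by lower endpoint lets each interval be compared against
    the smallest upper endpoint seen so far, instead of checking all
    O(k^2) pairs explicitly.
    """
    ordered = sorted(intervals)
    min_upper = ordered[0][1]
    for lower, upper in ordered[1:]:
        if lower > min_upper:   # a gap: this pair of intervals is disjoint
            return True
        min_upper = min(min_upper, upper)
    return False
```

A `True` result corresponds to rejecting the null hypothesis of equal variances; the samples whose intervals are disjoint are the ones whose standard deviations differ.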
Multiple comparison intervals for k = 2 samples
When there are only two samples, the multiple comparison intervals are given by:
where zα/2 is the upper α/2 percentile point of the standard normal distribution, ci = ni / (ni − zα/2), and Vi is given by the following formula:
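For illustration, the constant ci = ni / (ni − zα/2) can be computed with the standard library's statistics.NormalDist, so no external packages are needed (a sketch under the definitions above, not Minitab code):

```python
from statistics import NormalDist


def small_sample_constant(n_i, alpha):
    """ci = ni / (ni - z_{alpha/2}) from the k = 2 interval formula."""
    # upper alpha/2 percentile point of the standard normal distribution
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return n_i / (n_i - z)
```

Note that ci → 1 as ni grows, so this correction matters mainly for small samples.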
P-value for the test
If there are 2 samples in the design, then Minitab calculates the p-value for the multiple comparisons test using Bonett's method for a 2 variances test and a hypothesized ratio, ρ0, of 1.
If there are k > 2 samples in the design, then let Pij be the p-value of the test for any pair (i, j) of samples. The p-value for the multiple comparisons procedure, as an overall test of equality of variances, is given by the following:
For more information, including simulations and detailed algorithms for calculating Pij, go to the white paper Bonett's Method.
| Term | Description |
| --- | --- |
| ni | the number of observations in sample i |
| Yil | the lth observation in sample i |
| mi | the trimmed mean for sample i with trim proportions of 1 / (2√(ni − 4)) |
| k | the number of samples |
| Si | the standard deviation of sample i |
| α | the significance level for the test = 1 − (the confidence level / 100) |
| zα/2 | the upper α/2 percentile point of the standard normal distribution |
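As an illustration of the trimmed mean mi in the table above, the sketch below assumes Bonett's (2006) trim proportion of 1 / (2√(n − 4)) per tail — an assumption to verify against the white paper, since the exact proportion is not stated in this section:

```python
import math


def trimmed_mean(sample):
    """Trimmed mean m_i for one sample.

    Assumes a trim proportion of 1 / (2 * sqrt(n - 4)) per tail
    (Bonett 2006), which requires n > 4.
    """
    n = len(sample)
    if n <= 4:
        raise ValueError("trim proportion requires n > 4")
    prop = 1.0 / (2.0 * math.sqrt(n - 4))
    g = int(prop * n)            # observations removed from each tail
    ordered = sorted(sample)
    kept = ordered[g:n - g]
    return sum(kept) / len(kept)
```

Trimming makes the fourth-moment sums in the kurtosis estimator less sensitive to extreme observations than they would be around the ordinary sample mean.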