Interpret the key results for Mann-Whitney Test

Complete the following steps to interpret a Mann-Whitney test. Key output includes the estimate for difference, the confidence interval, and the p-value.

Step 1: Determine a confidence interval for the difference between two population medians

First, consider the difference in the sample medians, and then examine the confidence interval.

The difference is an estimate of the difference in population medians. Because this value is based on sample data and not on the entire population, it is unlikely that the sample difference equals the population difference. To better estimate the population difference, use the confidence interval for the difference.

The confidence interval provides a range of likely values for the difference between two population medians. For example, a 95% confidence level indicates that if you take 100 random samples from the population, you could expect approximately 95 of the samples to produce intervals that contain the population difference. The confidence interval helps you assess the practical significance of your results. Use your specialized knowledge to determine whether the confidence interval includes values that have practical significance for your situation. If the interval is too wide to be useful, consider increasing your sample size.

The Mann-Whitney test does not always achieve the confidence level that you specify because the Mann-Whitney statistic (W) is discrete. Minitab calculates the closest achievable confidence level.
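Minitab computes these quantities for you, but the underlying idea can be sketched outside of Minitab. The following Python example is a rough illustration, not Minitab's exact algorithm: it estimates the difference as the median of all pairwise differences between the two samples and picks the interval endpoints from those ordered differences using a normal approximation, so the achieved confidence is only approximately the requested level. The paint_a and paint_b values are made up for the example.

```python
import numpy as np
from scipy import stats

def mann_whitney_ci(x, y, conf=0.95):
    """Sketch of a confidence interval for the difference between two
    population medians, based on the ordered pairwise differences x_i - y_j.
    A normal approximation chooses the order statistics, so the achieved
    confidence is close to, but not exactly, the requested level."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    diffs = np.sort((x[:, None] - y[None, :]).ravel())  # all m*n pairwise differences

    # Point estimate: median of the pairwise differences
    estimate = np.median(diffs)

    # Normal approximation for the rank k that bounds the interval
    z = stats.norm.ppf(0.5 + conf / 2)
    k = int(np.floor(m * n / 2 - z * np.sqrt(m * n * (m + n + 1) / 12)))
    k = max(k, 0)
    lower, upper = diffs[k], diffs[m * n - 1 - k]
    return estimate, (lower, upper)

# Hypothetical persistence times (months) for two highway paints
paint_a = [9.2, 10.1, 8.7, 9.8, 10.4, 9.5]
paint_b = [11.0, 11.8, 10.9, 12.3, 11.5, 12.0]

est, (lo, hi) = mann_whitney_ci(paint_a, paint_b)
print(f"Estimated difference: {est:.2f}, CI: ({lo:.2f}, {hi:.2f})")
```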

Estimation for Difference

Difference   CI for Difference   Achieved Confidence
-1.85        (-3.0, -0.9)        95.5%

Key Results: Difference, CI for Difference

In these results, the estimate of the difference between the population medians of the number of months that the paint persists on the two highways is –1.85. You can be 95.5% confident that the difference between the population medians is between –3.0 and –0.9.

Step 2: Determine whether the difference is statistically significant

To determine whether the difference between the medians is statistically significant, compare the p-value to the significance level. Usually, a significance level (denoted as α or alpha) of 0.05 works well. A significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference.
P-value ≤ α: The difference between the medians is statistically significant (Reject H0)
If the p-value is less than or equal to the significance level, the decision is to reject the null hypothesis. You can conclude that the difference between the population medians is statistically significant. Use your specialized knowledge to determine whether the difference is practically significant. For more information, go to Statistical and practical significance.
P-value > α: The difference between the medians is not statistically significant (Fail to reject H0)
If the p-value is greater than the significance level, the decision is to fail to reject the null hypothesis. You do not have enough evidence to conclude that the difference between the population medians is statistically significant. You should make sure that your test has enough power to detect a difference that is practically significant. For more information, go to Increase the power of a hypothesis test.
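As a rough illustration of this decision rule outside of Minitab, the following Python sketch runs the test with scipy.stats.mannwhitneyu, which reports the U statistic rather than Minitab's W. The brand_a and brand_b values and the 0.05 significance level are assumptions made for the example.

```python
from scipy.stats import mannwhitneyu

# Hypothetical persistence times (months) for two brands of highway paint
brand_a = [9.2, 10.1, 8.7, 9.8, 10.4, 9.5, 10.0, 9.1]
brand_b = [11.0, 11.8, 10.9, 12.3, 11.5, 12.0, 11.2, 11.7]

alpha = 0.05
result = mannwhitneyu(brand_a, brand_b, alternative="two-sided")
print(f"U = {result.statistic}, p-value = {result.pvalue:.4f}")

if result.pvalue <= alpha:
    print("Reject H0: the difference between the medians is statistically significant.")
else:
    print("Fail to reject H0: not enough evidence of a difference between the medians.")
```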

A tie occurs when the same value is in both samples. If your data has ties, Minitab displays a p-value that is adjusted for ties and a p-value that is not adjusted. The adjusted p-value is usually more accurate than the unadjusted p-value. However, the unadjusted p-value is the more conservative estimate because it is always greater than the adjusted p-value for a specific pair of samples.

Test

Method   W-Value   P-Value

Key Result: P-Value

In these results, the null hypothesis states that the difference in the median time that two brands of paint persist on a highway is 0. Because the p-value is 0.0019, which is less than the significance level of 0.05, the decision is to reject the null hypothesis and conclude that the times that the two brands of paint persist on the highway differ.

Step 3: Identify outliers

Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.

On a boxplot, asterisks (*) denote outliers.

Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
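A standard boxplot flags a value as an outlier when it lies more than 1.5 times the interquartile range beyond the first or third quartile. The following Python sketch applies that rule directly to a hypothetical sample; the data values are made up for the example.

```python
import numpy as np

def boxplot_outliers(data):
    """Flag values outside the 1.5 * IQR whiskers used by a standard boxplot."""
    data = np.asarray(data, float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return data[(data < lower) | (data > upper)]

# Hypothetical persistence times (months); 18.5 is an obvious outlier
sample = [9.2, 10.1, 8.7, 9.8, 10.4, 9.5, 18.5]
print(boxplot_outliers(sample))  # -> [18.5]
```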

In these boxplots, there are no outliers.
