- What is the difference?
- What minimum difference value do I use in a power and sample size analysis for a Z-test or t-test?
- What minimum difference values do I use in a power and sample size analysis for one-way ANOVA?
- What minimum effect value should I use in a power and sample size analysis for a factorial design or a Plackett-Burman design?
- What value should I use for the minimum difference in a general full factorial design?

Difference is the smallest difference that you are interested in detecting between the hypothesized value of a population parameter and the actual value. You do not know the actual value, usually because you cannot measure all the units in the population. Difference is also known as the population effect or, simply, the effect.

Difference affects the power of hypothesis tests and ANOVA (analysis of variance) studies. Before you collect data for a hypothesis test or an ANOVA, you can perform a power and sample size analysis to determine whether the power is high enough to detect the difference.
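As a minimal sketch of how the difference, the sample size, and power relate, the function below approximates the power of a two-sided one-sample Z-test using only the Python standard library. The difference of 3, standard deviation of 10, and sample size of 100 are illustrative assumptions, not values from the text.

```python
from statistics import NormalDist
from math import sqrt

def z_test_power(diff, sigma, n, alpha=0.05):
    """Approximate power of a two-sided one-sample Z-test.

    diff  : minimum difference you want to detect
    sigma : assumed population standard deviation
    n     : sample size
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)    # two-sided critical value
    shift = abs(diff) * sqrt(n) / sigma   # standardized difference
    # probability the test statistic lands in either rejection region
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

# e.g. detect a difference of 3 with sigma = 10 and n = 100
print(round(z_test_power(3, 10, 100), 3))
```

Note how power rises with a larger difference or a larger sample: the same n that gives roughly 85% power to detect a difference of 3 gives near-certain power to detect a difference of 6.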

In the main dialog box, you need to specify the minimum difference you are interested in detecting. The way in which you express this difference depends on the type of test you are conducting:

- For a 1-sample Z or 1-sample t-test, express the difference in terms of the null hypothesis. For example, suppose you are testing whether your students' mean test score differs from the hypothesized value. If you would like to detect a difference of three points, you would enter 3 in Differences.
- For a 2-sample t-test, express the difference as the difference between the population means that you would like to be able to detect. For example, suppose you are investigating the effects of water acidity on the growth of two populations of tadpoles. If you are interested in differences of 4 mm or more, you would enter 4 in Differences.
- For a paired t-test, express the difference as the mean of the paired differences that you would like to be able to detect. For example, suppose you are investigating the effects of a SAT preparatory program on the SAT math scores of a group of students. If you are interested in score differences of 100 or more, you would enter 100 in Differences.

When estimating sample size, if you choose Less than as your alternative hypothesis, then you must enter a negative value in Differences. If you choose Greater than, you must enter a positive value.
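The reverse calculation, solving for the sample size that achieves a target power, can be sketched with the standard normal-approximation formula n = ((z₁₋α/₂ + z₁₋β)·σ/δ)². The standard deviation of 6 mm below is an illustrative assumption paired with the 4 mm tadpole example above; for a 2-sample test, the per-group size would instead use 2σ² in place of σ².

```python
from statistics import NormalDist
from math import ceil

def z_test_sample_size(diff, sigma, alpha=0.05, power=0.9):
    """Sample size for a two-sided one-sample Z-test (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = nd.inv_cdf(power)           # quantile for the target power
    n = ((z_alpha + z_beta) * sigma / abs(diff)) ** 2
    return ceil(n)                       # round up to a whole observation

# detect a 4 mm difference with an assumed sigma of 6 mm, 90% power
print(z_test_sample_size(4, 6))
```

Halving the difference you want to detect roughly quadruples the required sample size, which is why the minimum difference is the most influential input in the analysis.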

For a one-way ANOVA, you need to estimate the difference between the smallest and largest actual factor level means in order to calculate power or sample size. For example, suppose you are planning an experiment with four treatment conditions (four factor levels). You want to detect a difference between a control group mean of 10 and a treatment mean of 15. In this case, you want to be able to detect a difference of at least 5.
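One common way to turn that maximum difference into an effect size is Cohen's f under the least-favorable (most conservative) pattern: two level means sit at the extremes and the rest sit at the grand mean. This is a sketch of that convention, not necessarily the software's internal formula; the error standard deviation of 2 is an illustrative assumption.

```python
from math import sqrt

def cohens_f_from_max_diff(max_diff, sigma, k):
    """Conservative Cohen's f for a one-way ANOVA with k levels.

    Assumes the least-favorable pattern: two means max_diff apart
    at the extremes, the remaining k - 2 means at the grand mean.
    """
    # variance of the k level means under that pattern
    var_means = 2 * (max_diff / 2) ** 2 / k
    return sqrt(var_means) / sigma

# four treatment groups, difference of at least 5, assumed sigma = 2
print(round(cohens_f_from_max_diff(5, 2, 4), 3))
```

Because the means are assumed as spread-out as little as the maximum difference allows, any other arrangement of means with the same extremes yields at least this much power.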

For a two-level factorial or Plackett-Burman design, when calculating power or the number of replicates, you need to specify the minimum effect you are interested in detecting. You express this effect as the difference between the low and high factor level means. For example, suppose you are trying to determine the effect of column temperature on the purity of your product. You are interested only in detecting a difference in purity that is greater than 0.007 between the low and high levels of temperature. In the dialog box, enter 0.007 in Effects.
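For a balanced two-level design, the effect estimate is the high-level mean minus the low-level mean, with standard error 2σ/√N over N total runs. The sketch below uses a normal approximation (so it slightly overstates the power a t-based analysis would report); the error standard deviation of 0.005 and the 16-run design are illustrative assumptions.

```python
from statistics import NormalDist
from math import sqrt

def factorial_effect_power(effect, sigma, n_runs, alpha=0.05):
    """Approximate power to detect a main effect in a two-level design.

    effect : difference between high- and low-level means
    sigma  : error standard deviation
    n_runs : total runs, half at each level of the factor
    """
    se = 2 * sigma / sqrt(n_runs)        # std. error of the effect estimate
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = abs(effect) / se
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

# purity effect of 0.007 with assumed sigma = 0.005 and 16 runs
print(round(factorial_effect_power(0.007, 0.005, 16), 3))
```

Doubling the number of runs shrinks the standard error by √2, which is how adding replicates buys power for a fixed minimum effect.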

For a general full factorial design, specify the difference between the smallest and largest levels of the main effects. To provide conservative results, Minitab bases the power and sample size analysis on the main effect that has the largest number of levels.