Interpret the key results for Power and Sample Size for Paired t

Complete the following steps to interpret Power and Sample Size for Paired t. Key output includes the difference, the sample size, the power, and the power curve.

Step 1: Examine the calculated values

Using the values of the two power function variables that you entered, Minitab calculates the difference, the sample size, or the power of the test.

Difference

This value represents the difference between the population means of the paired observations.

Minitab calculates the minimum difference for which you can achieve the specified level of power for each sample size. Larger sample sizes enable the test to detect smaller differences. You want to detect the smallest difference that has practical consequences for your application.
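As a rough sketch of this calculation (not Minitab's own code), the smallest detectable difference can be recovered with statsmodels' power solver for the paired/one-sample t-test. The standard deviation of the paired differences used below (5) is an assumption chosen to be consistent with the power values shown later on this page.

```python
# Sketch only: solve for the minimum detectable difference at 90% power.
# sd_diff = 5 is an assumed standard deviation of the paired differences.
from statsmodels.stats.power import TTestPower

sd_diff = 5.0
analysis = TTestPower()  # paired t reduces to a one-sample t on the differences

for n in (10, 20, 50):
    # Solve for the effect size (difference / sd_diff) that achieves power 0.9
    effect = analysis.solve_power(effect_size=None, nobs=n,
                                  alpha=0.05, power=0.9,
                                  alternative='two-sided')
    print(f"n={n}: minimum detectable difference ~ {effect * sd_diff:.2f}")
```

As the loop shows, the detectable difference shrinks as the sample size grows, which is the pattern described above.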

Sample size

Minitab calculates how large your sample must be for a test with your specified power to detect each specified difference. Because sample sizes are whole numbers, the actual power of the test might be slightly greater than the power value that you specify.

If you increase the sample size, the power of the test also increases. You want enough observations in your sample to achieve adequate power, but not a sample size so large that you waste time and money on unnecessary sampling or flag unimportant differences as statistically significant.
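The sample-size calculation can be sketched the same way (again, not Minitab's implementation): solve for the fractional sample size that reaches the target power, then round up to a whole number of pairs, which is why the actual power slightly exceeds the specified power. The standard deviation of 5 is an assumption consistent with the power values on this page.

```python
# Sketch only: solve for the sample size needed to detect a difference of 3
# with 90% power, assuming the paired differences have sd = 5.
import math
from statsmodels.stats.power import TTestPower

sd_diff = 5.0
effect = 3.0 / sd_diff  # difference of 3 expressed as an effect size

analysis = TTestPower()
n = analysis.solve_power(effect_size=effect, nobs=None,
                         alpha=0.05, power=0.9, alternative='two-sided')
n_pairs = math.ceil(n)  # sample sizes are whole numbers, so round up
actual_power = analysis.power(effect_size=effect, nobs=n_pairs, alpha=0.05,
                              alternative='two-sided')
print(f"required pairs: {n_pairs}, actual power: {actual_power:.4f}")
```

Because of the rounding, `actual_power` comes out at or slightly above the requested 0.9, matching the note above.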

Power

Minitab calculates the power of the test based on the specified difference and sample size. A power value of 0.9 is usually considered adequate. A value of 0.9 indicates that you have a 90% chance of detecting a difference between the population paired means when a difference actually exists. If a test has low power, you might fail to detect a difference and mistakenly conclude that none exists. Usually, when the sample size or the difference is smaller, the test has less power to detect a difference.

Results

Difference  Sample Size  Power
3           10           0.395918
3           20           0.721005
3           50           0.986031
Key Results: Difference, Sample Size, Power

These results show that for a difference of 3 and sample sizes of 10, 20, and 50, the power of the test is approximately 0.4, 0.72, and 0.99, respectively. A sample size of 20 or fewer does not give the test adequate power to detect a difference of 3, while a sample size of 50 may give the test more power than necessary.
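These power values can be reproduced, approximately, with statsmodels' noncentral-t power calculation. This is a sketch rather than Minitab's code, and it assumes a standard deviation of 5 for the paired differences, which is consistent with the power values in the table.

```python
# Sketch only: reproduce the power values in the results table,
# assuming the paired differences have sd = 5 (so difference 3 -> effect 0.6).
from statsmodels.stats.power import TTestPower

sd_diff = 5.0
effect = 3.0 / sd_diff

analysis = TTestPower()
for n in (10, 20, 50):
    p = analysis.power(effect_size=effect, nobs=n, alpha=0.05,
                       alternative='two-sided')
    print(f"n={n}: power = {p:.6f}")
```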

Step 2: Examine the power curve

Use the power curve to assess the appropriate sample size or power for your test.

The power curve represents every combination of power and difference for each sample size when the significance level and the standard deviation are held constant. Each symbol on the power curve represents a calculated value based on the values that you enter. For example, if you enter a sample size and a power value, Minitab calculates the corresponding difference and displays the calculated value on the graph.

Examine the values on the curve to determine the difference between the paired means that the test can detect at a given power value and sample size. A power value of 0.9 is usually considered adequate, although some practitioners consider a power value of 0.8 to be adequate. If a hypothesis test has low power, you might fail to detect a difference that is practically significant. If you increase the sample size, the power of the test also increases. You want enough observations in your sample to achieve adequate power, but not a sample size so large that you waste time and money on unnecessary sampling or flag unimportant differences as statistically significant. If you decrease the size of the difference that you want to detect, the power of the test also decreases.

In this graph, the power curve for a sample size of 10 shows that the test has a power of approximately 0.4 for a difference of 3. The power curve for a sample size of 20 shows that the test has a power of approximately 0.72 for a difference of 3. The power curve for a sample size of 50 shows that the test has a power of approximately 0.99 for a difference of 3. As the difference approaches 0, the power of the test decreases and approaches α (also called the significance level), which is 0.05 for this analysis.
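The shape of the power curve near zero can be illustrated numerically (a sketch, again assuming a standard deviation of 5 for the paired differences): as the difference shrinks toward 0, the computed power decreases toward the significance level of 0.05.

```python
# Sketch only: show that power decreases toward alpha (0.05) as the
# difference approaches 0, for a sample size of 10 and assumed sd = 5.
from statsmodels.stats.power import TTestPower

sd_diff = 5.0
analysis = TTestPower()

for diff in (3.0, 1.0, 0.1, 0.001):
    p = analysis.power(effect_size=diff / sd_diff, nobs=10, alpha=0.05,
                       alternative='two-sided')
    print(f"difference={diff}: power = {p:.6f}")
```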
