Find definitions and interpretation guidance for every statistic and graph that is provided with Power and Sample Size for 1 Proportion.

The significance level (denoted as α or alpha) is the maximum acceptable level of risk for rejecting the null hypothesis when the null hypothesis is true (type I error). Alpha is also interpreted as the probability that the test rejects the null hypothesis (H_{0}) when it is actually true. Usually, you choose the significance level before you analyze the data. The default significance level is 0.05.

Use the significance level to limit the probability of rejecting the null hypothesis (H_{0}) when it is true. Higher values for the significance level give the test more power, but also increase the chance of making a type I error, which is rejecting the null hypothesis when it is true.
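To see how the significance level behaves when H_{0} is true, a quick simulation can estimate the type I error rate of a one-proportion z-test (a Python sketch using the normal approximation; the hypothesized proportion of 0.5 and sample size of 100 are illustrative choices, not values from this topic):

```python
import numpy as np

rng = np.random.default_rng(42)

p0 = 0.5           # hypothesized proportion; H0 is true in this simulation
n = 100            # observations per simulated sample
z_crit = 1.959964  # two-sided critical value for alpha = 0.05

# Draw many samples from a population where H0 holds exactly.
successes = rng.binomial(n, p0, size=2000)
p_hat = successes / n

# One-proportion z statistic (normal approximation).
z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)

# Fraction of samples that (wrongly) reject H0: the type I error rate.
type1_rate = np.mean(np.abs(z) > z_crit)
print(f"empirical type I error rate: {type1_rate:.3f}")
```

Because the test statistic is discrete, the empirical rate hovers near, but not exactly at, the 0.05 significance level; raising alpha raises both this rejection rate and the power.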

The comparison proportion is the value you want to compare with the hypothesized proportion.

Minitab calculates the comparison proportion. The difference between the comparison proportion and the hypothesized proportion is the minimum difference for which you can achieve the specified level of power for each sample size. Larger sample sizes enable the test to detect smaller differences. You want to detect the smallest difference that has practical consequences for your application.

To more fully investigate the relationship between the sample size and the comparison proportion at a given power, use the power curve.
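As a sketch of this calculation, the smallest detectable comparison proportion at a given power can be found numerically under the usual normal approximation (an illustration with SciPy, not Minitab's exact algorithm; the inputs 0.065, 1000, and 0.9 are example values):

```python
from math import sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def power_one_prop(p1, p0, n, alpha=0.05):
    """Approximate power of a two-sided one-proportion z-test
    (normal approximation) at comparison proportion p1."""
    z_a = norm.ppf(1 - alpha / 2)
    shift = abs(p1 - p0) * sqrt(n)
    return norm.cdf((shift - z_a * sqrt(p0 * (1 - p0))) / sqrt(p1 * (1 - p1)))

p0, n, target = 0.065, 1000, 0.9

# Smallest comparison proportion above p0 that reaches the target power:
# find where power(p1) - target crosses zero.
p1 = brentq(lambda p: power_one_prop(p, p0, n) - target, p0 + 1e-6, 0.5)
print(f"detectable comparison proportion: {p1:.4f}")
```

Any comparison proportion farther from the hypothesized proportion than this value is detected with at least the target power at this sample size.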

The sample size is the total number of observations in the sample.

Use the sample size to estimate how many observations you need to obtain a certain power value for the hypothesis test at a specific difference.

Minitab calculates how large your sample must be for a test with your specified power to detect the difference between the hypothesized proportion and comparison proportion. Because sample sizes are whole numbers, the actual power of the test might be slightly greater than the power value that you specify.

If you increase the sample size, the power of the test also increases. You want enough observations in your sample to achieve adequate power. But you don't want a sample size so large that you waste time and money on unnecessary sampling or find that unimportant differences are statistically significant.

To more fully investigate the relationship between the sample size and the difference at a given power, use the power curve.
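This calculation can be sketched with the standard normal-approximation sample-size formula for a one-proportion test (an illustrative Python version; Minitab's method may differ in details such as continuity corrections):

```python
from math import ceil, sqrt

from scipy.stats import norm

def sample_size_one_prop(p0, p1, power=0.9, alpha=0.05):
    """Approximate sample size for a two-sided one-proportion z-test
    to detect the difference between p0 (hypothesized proportion) and
    p1 (comparison proportion) with the requested power."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_b = norm.ppf(power)           # quantile matching the target power
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    # Round up: sample sizes are whole numbers, so the achieved power
    # is at least the requested power.
    return ceil((num / (p1 - p0)) ** 2)

# Hypothesized proportion 0.065 versus comparison proportion 0.085
n = sample_size_one_prop(0.065, 0.085)
print(n)
```

With a hypothesized proportion of 0.065, a comparison proportion of 0.085, a power of 0.9, and α = 0.05, this formula gives a sample size of about 1767 observations; halving the difference roughly quadruples the required sample size.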

The power of a hypothesis test is the probability that the test correctly rejects the null hypothesis. The power of a hypothesis test is affected by the sample size, the difference, the variability of the data, and the significance level of the test.

For more information, go to What is power?.

Minitab calculates the power of the test based on the specified comparison proportion and sample size. A power value of 0.9 is usually considered adequate. A value of 0.9 indicates that you have a 90% chance of detecting a difference between the hypothesized proportion and the population comparison proportion when a difference actually exists. If a test has low power, you might fail to detect a difference and mistakenly conclude that none exists. Usually, when the sample size or the difference is smaller, the test has less power to detect a difference.

If you enter a comparison proportion and a power value for the test, then Minitab calculates how large your sample must be. Minitab also calculates the actual power of the test for that sample size. Because sample sizes are whole numbers, the actual power of the test might be slightly greater than the power value that you specify.
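The power calculation can be sketched the same way under the normal approximation (an illustrative Python version, not necessarily Minitab's exact method; the proportions 0.065 and 0.085 and the sample size 1000 are example inputs):

```python
from math import sqrt

from scipy.stats import norm

def power_one_prop(p0, p1, n, alpha=0.05):
    """Approximate power of a two-sided one-proportion z-test
    (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    shift = abs(p1 - p0) * sqrt(n)
    return norm.cdf((shift - z_a * sqrt(p0 * (1 - p0))) / sqrt(p1 * (1 - p1)))

# Hypothesized 0.065 vs. comparison 0.085 with 1000 observations
achieved = power_one_prop(0.065, 0.085, 1000)
print(f"approximate power: {achieved:.3f}")
```

For these inputs the formula returns approximately 0.704, consistent with the power-curve values discussed later in this topic.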

The power curve plots the power of the test versus the comparison proportion.

Use the power curve to assess the appropriate sample size or power for your test.

The power curve represents every combination of power and comparison proportion for each sample size when the significance level is held constant. Each symbol on the power curve represents a calculated value based on the values that you enter. For example, if you enter a sample size and a power value, Minitab calculates the corresponding comparison proportion and displays the calculated value on the graph.

Examine the values on the curve to determine the difference between the comparison proportion and the hypothesized proportion that can be detected at a certain power value and sample size. A power value of 0.9 is usually considered adequate; however, some practitioners consider a power value of 0.8 to be adequate. If a hypothesis test has low power, you might fail to detect a difference that is practically significant.

If you increase the sample size, the power of the test also increases. You want enough observations in your sample to achieve adequate power. But you don't want a sample size so large that you waste time and money on unnecessary sampling or find that unimportant differences are statistically significant. If you decrease the size of the difference that you want to detect, the power also decreases.

In this graph, the power curve for a sample size of 500 shows that the test has a power of 0.431 for a comparison proportion of 0.045 and a power of 0.449 for a comparison proportion of 0.085. For a sample size of 1000, the power curve shows that the test has a power of 0.764 for a comparison proportion of 0.045 and a power of 0.704 for a comparison proportion of 0.085. Because the power of the test is not adequate to detect a difference between these comparison proportions and the hypothesized proportion of 0.065, try to increase the sample size, if possible. You can also use the power curve to determine the comparison proportions that the test can detect with an adequate level of power at the specified sample size.
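A power curve like the one described here can be tabulated by evaluating the approximate power over the comparison proportions of interest (a Python sketch under the normal approximation; the resulting values are close to those quoted above):

```python
from math import sqrt

from scipy.stats import norm

def power_one_prop(p0, p1, n, alpha=0.05):
    """Approximate power of a two-sided one-proportion z-test
    (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    shift = abs(p1 - p0) * sqrt(n)
    return norm.cdf((shift - z_a * sqrt(p0 * (1 - p0))) / sqrt(p1 * (1 - p1)))

p0 = 0.065  # hypothesized proportion
for n in (500, 1000):
    for p1 in (0.045, 0.085):
        pw = power_one_prop(p0, p1, n)
        print(f"n={n:5d}  comparison={p1:.3f}  power={pw:.3f}")
```

Evaluating the function over a fine grid of comparison proportions for each sample size traces out the full curve; the asymmetry between 0.045 and 0.085 arises because the variance of a proportion depends on the proportion itself.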