Find definitions and interpretation guidance for every statistic and graph that is provided with Power and Sample Size for
Plackett-Burman Design.

The significance level (denoted by alpha or α) is the maximum acceptable level of risk for a type I error.

Use the significance level to decide whether an effect is statistically significant. Because the significance level is the threshold for statistical significance, a higher value increases the chance of making a type I error. A type I error is concluding that an effect is statistically significant when, in reality, it is not.

The assumed standard deviation is the estimate of the standard deviation of the response measurements at replicated experimental runs. If you already performed an analysis in Minitab that produced an ANOVA table, you can use the square root of the adjusted mean square for error as this estimate.

Use the assumed standard deviation to describe how variable the data are. Higher values of the assumed standard deviation indicate more variation or "noise" in the data, which decreases the statistical power of a design.
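If you have raw measurements at replicated runs rather than an ANOVA table, you can pool the variation within each set of replicates. The following is an illustrative sketch, not a Minitab function, and the data values are made up:

```python
# Minimal sketch: estimating the assumed standard deviation by pooling
# the variance of replicated runs. Pooling mirrors taking the square
# root of the ANOVA mean square for error.
from statistics import mean

def pooled_sd(groups):
    """Square root of the pooled variance across replicate groups.

    Each group holds response measurements taken at identical factor
    settings.
    """
    ss = 0.0   # pooled sum of squared deviations
    df = 0     # pooled degrees of freedom
    for g in groups:
        m = mean(g)
        ss += sum((x - m) ** 2 for x in g)
        df += len(g) - 1
    return (ss / df) ** 0.5

# Two factor settings, each run three times (illustrative numbers):
print(pooled_sd([[10, 12, 11], [20, 23, 22]]))  # ≈ 1.291
```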

The number shows how many factors are in the design.

Use the number of factors to verify that the design has all of the factors that you need to study. Factors are the variables that you control in the experiment. Factors are also known as independent variables, explanatory variables, and predictor variables. For the power and sample size calculations, all factors are numeric. Numeric factors use a few controlled values in the experiment, even though many values are possible. These values are known as factor levels.

For example, you are studying factors that could affect plastic strength during the manufacturing process. You decide to include Temperature in your experiment. Because temperature is a factor, only three temperature settings are in the experiment: 100 °C, 150 °C, and 200 °C.

The number shows how many corner points are in one replicate.

Use this number to identify the Plackett-Burman design that the power calculations use. A corner point is an experimental run where the factors are at either their high or low levels.

The number shows how many center points are in the design.

Use the number of center points to see the effect of different numbers of center points on the results. Center points are runs where all of the factors are set midway between their low and high levels.

Center points usually have a small influence on the results when the design includes replicates of the corner points. Center points have other uses besides their influence on the power calculations. For example, the test for curvature in the response requires center points.

In these results, the points on the power curves show calculations for a difference of 3. The design with 1 replicate and no center points has a power close to 0.5. The design with 1 replicate and 6 center points has a power of almost 0.9. With two replicates, the power curves for 0 center points and 6 center points are indistinguishable on the graph. The curve for 6 center points is slightly higher for nonzero effects. The power values are both close to 1.

Plackett-Burman Design
α = 0.05 Assumed standard deviation = 1.8
Method
Factors: 17 Design: 20
Including a term for center points in model.

Results
Center Points | Effect | Reps | Total Runs | Power
---|---|---|---|---
0 | 3 | 1 | 20 | 0.517308
0 | 3 | 2 | 40 | 0.998927
6 | 3 | 1 | 26 | 0.889603
6 | 3 | 2 | 46 | 0.999082

If you enter the number of replicates, the power value, and the number of center points, Minitab calculates the effect. The effect is the difference in the response between the high and low levels of a factor that you want the design to detect. This difference is the result of one factor alone (main effect).

Use the effect size to determine the ability of the design to detect an effect. If you enter a number of replicates, a power, and a number of center points, then Minitab calculates the smallest effect size that the design can detect with the specified power. Usually, more replicates allow a designed experiment to detect smaller effects.

In these results, the design with one replicate can detect a difference of about 0.015 with 80% power. The difference that the design can detect with 90% power is larger than 0.015, about 0.018. The design with 2 replicates can detect a difference that is smaller than 0.015 with 80% power, about 0.007.

Plackett-Burman Design
α = 0.05 Assumed standard deviation = 0.01017
Method
Factors: 31 Design: 32
Center pts (total): 4
Including a term for center points in model.

Results
Center Points | Reps | Total Runs | Power | Effect
---|---|---|---|---
4 | 1 | 36 | 0.8 | 0.0153027
4 | 1 | 36 | 0.9 | 0.0180278
4 | 2 | 68 | 0.8 | 0.0073261
4 | 2 | 68 | 0.9 | 0.0084775
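Minitab's exact algorithm is not documented in this section, but the detectable-effect calculation can be approximated by inverting a two-sided noncentral-t power calculation numerically. The sketch below assumes error degrees of freedom equal to the total runs minus the model terms (one per factor, plus the intercept and a center-point term), and uses the design from the output above (31 factors, 32-run design, 4 center points, σ = 0.01017):

```python
# Hedged sketch: solving for the smallest detectable effect at a target
# power, assuming a noncentral-t power model for a two-level design.
# This is an approximation, not Minitab's documented internals.
from scipy.stats import t, nct
from scipy.optimize import brentq

def power_for_effect(effect, n_corner, reps, cp, n_factors, sigma, alpha=0.05):
    runs = n_corner * reps + cp
    terms = n_factors + 1 + (1 if cp > 0 else 0)  # factors + intercept + cp term
    df = runs - terms                             # error degrees of freedom
    # The effect estimate has standard error 2*sigma/sqrt(corner runs),
    # so the noncentrality parameter is effect / SE(effect).
    ncp = effect * (n_corner * reps) ** 0.5 / (2 * sigma)
    tcrit = t.ppf(1 - alpha / 2, df)
    return 1 - nct.cdf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

def detectable_effect(target_power, **kw):
    # Power increases monotonically with effect size, so bracket and solve.
    return brentq(lambda e: power_for_effect(e, **kw) - target_power, 1e-9, 1.0)

eff = detectable_effect(0.8, n_corner=32, reps=1, cp=4,
                        n_factors=31, sigma=0.01017)
print(eff)  # close to the 0.0153027 in the table above
```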

Replicates are multiple experimental runs with the same factor settings.

Use the number of replicates to estimate how many experimental runs to include in the design. If you enter a power, effect size, and number of center points, Minitab calculates the number of replicates. Because the numbers of replicates and center points are given in integer values, the actual power may be greater than your target value. If you increase the number of replicates, the power of your design also increases. You want enough replicates to achieve adequate power.

Because the replicates are integer values, the power values that you specify are target power values. The actual power values are for the number of replicates and the number of center points in the designed experiment. The actual power values are at least as large as the target power values.

In these results, Minitab calculates the number of replicates to reach the target power. The design that detects an effect of 2 with a power of 0.8 requires 1 replicate. To achieve a power of 0.9, the design requires 2 replicates. The actual power with 2 replicates is greater than 0.99. This actual power is the smallest power value that is greater than or equal to 0.9 and obtainable using an integer number of replicates. To detect the smaller effect of 0.9 with 0.8 power, the design requires 4 replicates. To detect the smaller effect of 0.9 with 0.9 power, the design requires 5 replicates.

Plackett-Burman Design
α = 0.05 Assumed standard deviation = 1.7
Method
Factors: 15 Design: 32
Center pts (total): 0

Results
Center Points | Effect | Reps | Total Runs | Target Power | Actual Power
---|---|---|---|---|---
0 | 2.0 | 1 | 32 | 0.8 | 0.877445
0 | 2.0 | 2 | 64 | 0.9 | 0.995974
0 | 0.9 | 4 | 128 | 0.8 | 0.843529
0 | 0.9 | 5 | 160 | 0.9 | 0.914018

An experimental run is a factor level combination at which you measure responses. The total number of runs is how many measurements of the response are in the design. Multiple executions of the same factor level combination are considered separate experimental runs and are called replicates.

Use the number of total runs to verify that the designed experiment is the right size for your resources. For a Plackett-Burman design, this formula gives the total number of runs:

Total runs = n × r + cp_total

Term | Description
---|---
n | Number of corner points per replicate
r | Number of replicates
cp_total | Number of center points

In these results, a design with 12 corner points and 4 center points has 12 × 1 + 4 = 16 total runs with 1 replicate. With 2 replicates, the total is 12 × 2 + 4 = 28 runs.

Plackett-Burman Design
α = 0.05 Assumed standard deviation = 1.8
Method
Factors: 8 Design: 12
Center pts (total): 4
Including a term for center points in model.

Results
Center Points | Effect | Reps | Total Runs | Power
---|---|---|---|---
4 | 2.5 | 1 | 16 | 0.523009
4 | 2.5 | 2 | 28 | 0.895399
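The total-runs arithmetic above can be wrapped in a small helper (a trivial sketch; `total_runs` is not a Minitab function):

```python
def total_runs(n_corner, reps, cp_total):
    """Corner points per replicate times replicates, plus all center points."""
    return n_corner * reps + cp_total

print(total_runs(12, 1, 4))  # 16, matching the 1-replicate row
print(total_runs(12, 2, 4))  # 28, matching the 2-replicate row
```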

The power of a design is the probability that the design detects an effect that truly exists, that is, the probability of concluding that the effect is statistically significant. The effect size is the difference between the means of the response variable at the high and low levels of a factor.

Use the power value to determine the ability of the design to detect an effect. If you enter a number of replicates, an effect size, and a number of center points, then Minitab calculates the power of the design. A power value of 0.9 is usually considered adequate. A value of 0.9 indicates that a design has a 90% chance to detect an effect of the size that you specify. Usually, the fewer the number of replicates, the lower the power. If a design has low power, you might fail to detect an effect and mistakenly conclude that none exists.

These results demonstrate how an increase in the number of experimental runs increases the power. For an effect size of 0.9, the power of the design is approximately 0.55 with 64 total runs. With 160 total runs the power of the design increases to about 0.91.

These results also demonstrate how an increase in the effect size increases the power. For a 64-run design, the power is approximately 0.55 for an effect size of 0.9. With an effect size of 1.5, the power increases to about 0.93.

Plackett-Burman Design
α = 0.05 Assumed standard deviation = 1.7
Method
Factors: 15 Design: 32
Center pts (total): 0

Results
Center Points | Effect | Reps | Total Runs | Power
---|---|---|---|---
0 | 1.5 | 5 | 160 | 0.999830
0 | 1.5 | 2 | 64 | 0.932932
0 | 0.9 | 5 | 160 | 0.914018
0 | 0.9 | 2 | 64 | 0.545887
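The power values above can be approximated with a two-sided noncentral-t calculation. The sketch below is not Minitab's documented algorithm; it assumes error degrees of freedom equal to the total runs minus the model terms, and it uses the design from the output above (15 factors, 32-run design, 0 center points, σ = 1.7, α = 0.05):

```python
# Hedged sketch of the power calculation for a two-level design effect,
# using a two-sided noncentral-t test. Approximation only.
from scipy.stats import t, nct

def pb_power(effect, n_corner, reps, cp, n_factors, sigma, alpha=0.05):
    runs = n_corner * reps + cp
    terms = n_factors + 1 + (1 if cp > 0 else 0)  # + intercept, + cp term
    df = runs - terms                             # error degrees of freedom
    # SE(effect) = 2*sigma / sqrt(number of corner runs), so the
    # noncentrality parameter is effect / SE(effect).
    ncp = effect * (n_corner * reps) ** 0.5 / (2 * sigma)
    tcrit = t.ppf(1 - alpha / 2, df)
    return 1 - nct.cdf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

print(pb_power(2.0, 32, 1, 0, 15, 1.7))  # close to 0.877445
print(pb_power(0.9, 32, 2, 0, 15, 1.7))  # close to 0.545887
```

Increasing either the number of runs or the effect size raises the noncentrality parameter, which is why both changes increase power in the table above.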

The power curve plots the power of the design versus the size of the effect. Effect refers to the difference between the mean response value at the high and low levels of a factor.

Use the power curve to assess the appropriate properties for your design.

The power curve represents the relationship between power and effect size for every combination of center points and replicates. Each symbol on the power curve represents a calculated value based on the properties that you enter. For example, if you enter a number of replicates, a power value, and a number of center points, then Minitab calculates the corresponding effect size and displays the calculated value on the graph for that combination of replicates and center points. If you solve for replicates or center points, the plot also includes curves for the other combinations of replicates and center points that achieve the target power. The plot does not show curves for cases that do not have enough degrees of freedom to assess statistical significance.

Examine the values on the curve to determine the effect size that the experiment detects at a certain power value, number of corner points, and number of center points. A power value of 0.9 is usually considered adequate. However, some practitioners consider a power value of 0.8 to be adequate. If a design has low power, you might fail to detect an effect that is practically significant. Increasing the number of replicates increases the power of your design. You want enough experimental runs in your design to achieve adequate power. A design has more power to detect a larger effect than a smaller effect.