The carryover statistic measures the residual effect of one treatment on the response to the treatment that follows it. For example, suppose the reference treatment has a strong effect and the test treatment has a weak effect. If the washout period is not long enough, the residual effects of the reference treatment in Period 1 can cause the effects of the test treatment in Period 2 to appear stronger than they actually are.
Compare the p-value for the carryover effect with the significance level (denoted as alpha or α). An α of 0.05 is common. If the p-value is less than α, the carryover effect is statistically significant. In that case, the equivalence test results may be biased.
If either the carryover effect or the period effect is statistically significant, the results of the equivalence test may not be reliable. Also, the treatment effect may be confounded with the carryover effect and/or the period effect, making the estimates uncertain. When you use a 2x2 crossover design, you should carefully plan your study to avoid carryover effects and period effects before you collect and analyze the data.
The treatment statistic measures the difference between the effects of the test treatment and of the reference treatment. In most studies, the treatment effect is the effect of interest.
Compare the p-value for the treatment effect with the significance level (denoted as alpha or α). An α of 0.05 is common. If the p-value is less than α, the treatment effect is statistically significant.
If either the carryover effect or the period effect is statistically significant, the results of the equivalence test may not be reliable. Also, the treatment effect may be confounded with the carryover effect and/or the period effect, making the estimates uncertain. When you use a 2x2 crossover design, you should carefully plan your study to avoid carryover effects and period effects before you collect and analyze the data.
The period statistic measures the difference between the response in Period 1 and in Period 2. For example, if you measure blood pressure as the response, you might find that the response decreases during Period 2 simply because participants are more acclimated to the testing environment and procedures. The participants' acclimation could thus result in a period effect.
Compare the p-value for the period effect with the significance level (denoted as alpha or α). An α of 0.05 is common. If the p-value is less than α, the period effect is statistically significant. In that case, the results of the equivalence test may be biased.
If either the carryover effect or the period effect is statistically significant, the results of the equivalence test may not be reliable. Also, the treatment effect may be confounded with the carryover effect and/or the period effect, making the estimates uncertain. When you use a 2x2 crossover design, you should carefully plan your study to avoid carryover effects and period effects before you collect and analyze the data.
The standard error of each effect estimates the variability among the sample effects that you would obtain if you took repeated samples from the same population.
Use the standard error of the effect to assess the precision of the estimate of each effect in relation to random sampling variability. Usually, the smaller the standard error, the more precise the estimate of the effect and the narrower its confidence interval.
The t-value for each effect is calculated by dividing the effect by its standard error. The smaller the standard error is in relation to the size of the effect, the larger the absolute value of the t-value. If the p-value associated with this t-value is less than your alpha level, you conclude that the effect is statistically significant. For more information, go to the section on P-value for the effects.
The degrees of freedom (DF) indicate the amount of information that is available in your data to estimate the values of the unknown parameters, and to calculate the variability of these estimates.
Minitab uses the degrees of freedom to calculate the test statistic. Degrees of freedom are affected by the sample size. Increasing your sample size provides more information about the population, which increases the degrees of freedom.
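As a concrete illustration (a sketch of the standard analysis, not Minitab's internal code): in a 2x2 crossover design, each effect is tested with a two-sample t comparison between the two sequence groups, so the degrees of freedom for each test are n1 + n2 − 2, where n1 and n2 are the numbers of participants in the two sequences.

```python
# Degrees of freedom for the effect tests in a 2x2 crossover design.
# Sketch under the standard model: each effect (carryover, treatment,
# period) is tested with a two-sample t comparison between sequences.
def crossover_df(n1, n2):
    """n1, n2: number of participants in sequence 1 (RT) and sequence 2 (TR)."""
    return n1 + n2 - 2

print(crossover_df(12, 12))  # 24 participants in total -> 22
```

Adding participants to either sequence increases the degrees of freedom, which is one way a larger sample provides more information about the population.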
The t-value is a test statistic that measures the magnitude of an effect, relative to the variability of the samples (the standard error).
You can use the t-value to determine whether to reject the null hypothesis. However, most people use the p-value or the confidence interval because they are easier to interpret.
Dividing each effect by its standard error calculates a t-value for the effect. The smaller the size of the standard error in relation to the size of the effect, the greater the absolute value of the t-value, and the stronger the evidence against the null hypothesis.
The t-value for each effect is used to calculate its corresponding p-value. If the p-value associated with this t-value is less than your significance level, you conclude that the effect is statistically significant. For more information, see the section on P-value for the effects.
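The calculation can be sketched in Python with SciPy. The effect size, standard error, and degrees of freedom below are hypothetical values, not output from a real study:

```python
from scipy import stats

def t_and_p(effect, se, df):
    """t-value and two-sided p-value for an estimated effect.

    effect: estimated effect (for example, the treatment effect)
    se:     standard error of the effect
    df:     degrees of freedom for the test
    """
    t = effect / se
    p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value from the t-distribution
    return t, p

# Hypothetical values: effect = 1.8, SE = 0.7, df = 22
t, p = t_and_p(effect=1.8, se=0.7, df=22)
print(f"t = {t:.3f}, p = {p:.4f}")
```

The same effect with a smaller standard error yields a larger |t| and a smaller p-value, which is the relationship the text above describes.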
The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.
For an equivalence test for a 2x2 crossover design, Minitab calculates p-values for the carryover effect, the period effect, and the treatment effect.
Use the p-value for each effect to determine whether the effect is statistically significant. Compare each p-value with the significance level (denoted as alpha or α). Usually, an α of 0.05 works well.
If either the carryover effect or the period effect is statistically significant, the results of the equivalence test may not be reliable. The treatment effect may be confounded with the period effect and/or the carryover effect. When you use a 2x2 crossover design, you should carefully plan your study to avoid carryover effects and period effects before you collect and analyze the data.
If the carryover effect and period effect are not statistically significant, determine whether the treatment effect is statistically significant. Usually the treatment effect is the effect of interest.
A statistically significant treatment effect does not, by itself, indicate that you cannot claim equivalence. The difference between the treatment means may still be within your equivalence limits. Use the results on the equivalence plot to determine whether you can claim equivalence. For more information, go to Graphs for Equivalence Test for a 2x2 Crossover Design and click "Equivalence plot".
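The decision rules above can be sketched as follows. The p-values are hypothetical placeholders for the values you would read from the output:

```python
alpha = 0.05

# Hypothetical p-values for the three effects in a 2x2 crossover design
p_values = {"Carryover": 0.43, "Period": 0.27, "Treatment": 0.01}

for effect, p in p_values.items():
    verdict = "significant" if p < alpha else "not significant"
    print(f"{effect}: p = {p} -> {verdict} at alpha = {alpha}")

# If either nuisance effect is significant, the equivalence results
# may not be reliable and the treatment effect may be confounded.
if p_values["Carryover"] < alpha or p_values["Period"] < alpha:
    print("Caution: the equivalence test results may not be reliable.")
```

With these hypothetical values, only the treatment effect is significant, so you would proceed to the equivalence plot to see whether the difference falls within your equivalence limits.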
The confidence interval provides a range of likely values for each effect, based on your sample data.
For each effect, use the confidence interval and the p-value to determine whether the effect is statistically significant.
If either the carryover effect or the period effect is statistically significant, the results of the equivalence test may not be reliable. Also, the treatment effect may be confounded with the carryover effect and/or the period effect, making the estimates uncertain. When you use a 2x2 crossover design, you should carefully plan your study to avoid carryover effects and period effects before you collect and analyze the data.
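As an illustration, a two-sided confidence interval for an effect can be constructed as effect ± t* × SE. The sketch below uses hypothetical numbers and assumed equivalence limits of ±2; note that equivalence procedures often use a 100(1 − 2α)% interval, so this only shows the construction, not Minitab's exact defaults:

```python
from scipy import stats

def effect_ci(effect, se, df, confidence=0.95):
    """Two-sided confidence interval for an effect: effect +/- t* x SE."""
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)
    return effect - t_crit * se, effect + t_crit * se

# Hypothetical treatment effect, standard error, and degrees of freedom
lower, upper = effect_ci(effect=0.6, se=0.4, df=22)
print(f"95% CI for the treatment effect: ({lower:.3f}, {upper:.3f})")

# One common equivalence criterion: the interval for the treatment
# effect lies entirely within the equivalence limits (assumed +/- 2 here).
lel, uel = -2.0, 2.0
print("Within equivalence limits" if lel < lower and upper < uel
      else "Cannot claim equivalence from this interval")
```

A smaller standard error narrows the interval, which is why a more precise estimate makes it easier to show that an effect lies within the equivalence limits.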