The error variance ratio is the error variance for the response divided by the error variance for the predictor.

Use the error variance ratio to describe how different the errors for the response and predictor are.

Ratio | Interpretation |
---|---|
δ > 1 | The response measurements are more uncertain than the predictor measurements. |
δ = 1 | The response measurements and the predictor measurements are equally uncertain. |
δ < 1 | The response measurements are more certain than the predictor measurements. |
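The ratio can be estimated from replicate readings of the same specimen on each instrument: the error variance of each set of replicates is its sample variance, and δ is their quotient. A minimal Python sketch, with hypothetical replicate data:

```python
import statistics

# Hypothetical replicate readings of one specimen on each instrument.
response_replicates = [10.1, 10.4, 9.8, 10.2]   # response (e.g., new instrument)
predictor_replicates = [10.0, 10.3, 10.1, 9.9]  # predictor (e.g., current instrument)

# Error variance of each set of replicates (sample variance).
var_response = statistics.variance(response_replicates)
var_predictor = statistics.variance(predictor_replicates)

# delta > 1 means the response measurements are more uncertain
# than the predictor measurements.
delta = var_response / var_predictor
```

In practice you would pool the variances over many specimens; a single specimen is used here only to keep the sketch short.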

Use the regression equation to describe the relationship between the response and the terms in the model. The regression equation is an algebraic representation of the regression line. The regression equation for the linear model takes the following form: Y = b_{0} + b_{1}x_{1}. In the regression equation, Y is the response variable, b_{0} is the constant or intercept, b_{1} is the estimated coefficient for the linear term (also known as the slope of the line), and x_{1} is the value of the term.
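Given fitted values for b_{0} and b_{1}, a predicted response is a direct substitution into the equation. A small sketch using the fitted equation from the output below (New = 0.644 + 0.995 Current); the reading of 10.0 is hypothetical:

```python
# Fitted coefficients from the regression equation in the output below.
b0, b1 = 0.644, 0.995

# Hypothetical reading on the current instrument.
current = 10.0

# Predicted reading on the new instrument: Y = b0 + b1 * x1.
new_pred = b0 + b1 * current
```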

In orthogonal regression, both the value of x_{1} and the value of Y are measured with error. The true values of the predictor variable and the response variable are unknown.

You often use orthogonal regression in clinical chemistry or a laboratory to determine whether two instruments or methods provide comparable measurements. When the measurements are comparable, the coefficient for the constant is 0 and the coefficient for the linear term is 1. Use the confidence intervals in the coefficients table to decide whether statistical evidence exists against either value.
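When the error variance ratio is known, the orthogonal (Deming) regression coefficients have a closed form. The sketch below is a hypothetical helper using the standard Deming slope formula, not Minitab's implementation, and the measurement data are invented:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Orthogonal (Deming) regression of y on x.

    delta is the error variance ratio: var(y errors) / var(x errors).
    Returns (b0, b1). Hypothetical helper, not Minitab's implementation.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    # Standard Deming regression slope.
    b1 = (syy - delta * sxx
          + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b0 = ybar - b1 * xbar
    return b0, b1

# Invented paired readings of the same specimens on the two instruments.
current = [1.0, 2.0, 3.0, 4.0, 5.0]
new = [1.1, 1.9, 3.2, 3.9, 5.1]
b0, b1 = deming_fit(current, new, delta=0.9)
```

Comparable instruments would give b0 near 0 and b1 near 1; the confidence intervals, not the point estimates alone, support that conclusion.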

A regression coefficient describes the size and direction of the relationship between a predictor and the response variable. Coefficients are the numbers by which the values of the term are multiplied in a regression equation.

The coefficient of the term represents the change in the mean response for one-unit change in that term, while the other terms in the model are held constant. The sign of the coefficient indicates the direction of the relationship between the term and the response. If the coefficient is negative, as the term increases, the mean value of the response decreases. If the coefficient is positive, as the term increases, the mean value of the response increases.

The standard error of the coefficient estimates the variability between coefficient estimates that you would obtain if you took samples from the same population again and again. The calculation assumes that the sample size and the coefficients to estimate would remain the same if you sampled again and again.

Use the standard error of the coefficient to measure the precision of the estimate of the coefficient. The smaller the standard error, the more precise the estimate.

Dividing the coefficient by its standard error gives the Z-value. If the p-value associated with this Z-statistic is less than your significance level, you conclude that the coefficient is statistically significant.

The Z-value is a test statistic for tests that measure the ratio between the coefficient and its standard error.

Minitab uses the Z-value to calculate the p-value, which you use to make a decision about the statistical significance of the terms.
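The Z statistic and its two-sided p-value can be reproduced from a coefficient and its standard error with the standard normal distribution. A sketch using the constant term from the coefficients table below:

```python
import math

def z_and_p(coef, se):
    """Z statistic and two-sided p-value from the standard normal."""
    z = coef / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    p = 2 * (1 - phi)
    return z, p

# Constant term from the coefficients table below.
z, p = z_and_p(0.64441, 1.74470)
# z is about 0.369 and p is about 0.712, matching the table.
```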

You often use orthogonal regression in clinical chemistry or a laboratory to determine whether two instruments or methods provide comparable measurements. Use the confidence intervals for the coefficients of the constant and the linear term to determine whether the measurements from the two methods differ.

- For the constant term, a low p-value provides evidence that the constant is not zero. If the constant is not zero, you can usually conclude that the measurements from the two methods have a significant difference, or bias.
- For the linear term, a low p-value provides evidence that the linear term is not zero. If the linear term is not zero, then an association exists between the measurements. This p-value does not provide enough information to conclude that the measurements are comparable. Use the confidence interval for the coefficient to decide whether the measurements are comparable.

The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

These confidence intervals (CI) are ranges of values that are likely to contain the true value of the coefficient for each term in the model.

Because samples are random, two samples from a population are unlikely to yield identical confidence intervals. However, if you take many random samples, a certain percentage of the resulting confidence intervals contain the unknown population parameter. The percentage of these confidence intervals that contain the parameter is the confidence level of the interval.

The confidence interval is composed of the following two parts:

- Point estimate: This single value estimates a population parameter by using your sample data. The confidence interval is centered around the point estimate.
- Margin of error: The margin of error defines the width of the confidence interval and is determined by the observed variability in the sample, the sample size, and the confidence level. To calculate the upper limit of the confidence interval, the margin of error is added to the point estimate. To calculate the lower limit of the confidence interval, the margin of error is subtracted from the point estimate.
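For example, the approximate 95% interval for the linear term in the output below can be reconstructed from its point estimate and a normal-approximation margin of error. The critical value 1.96 is an assumption here (the usual normal approximation for 95% confidence):

```python
# Linear term from the coefficients table below.
coef, se = 0.99542, 0.01415

# Margin of error = critical value * standard error; 1.96 assumes a
# 95% confidence level and the normal approximation.
margin = 1.96 * se

# Lower limit = point estimate - margin; upper limit = point estimate + margin.
ci = (coef - margin, coef + margin)
# ci is approximately (0.96769, 1.02315), matching the table.
```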

You often use orthogonal regression in clinical chemistry or a laboratory to determine whether two instruments or methods provide comparable measurements. If the confidence interval for the constant term contains zero and the interval for the linear term contains 1, then you can usually conclude that the measurements from the two instruments are comparable.

In these results, the confidence interval for the constant term is approximately (−3, 4). Because the interval contains 0, this part of the analysis does not provide evidence that the measurements from the two instruments differ.

The confidence interval for the linear term is approximately (0.97, 1.02). Because the interval contains 1, this part of the analysis does not provide evidence that the measurements from the two instruments differ.
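The comparability check described above reduces to two interval-membership tests: 0 in the constant's interval and 1 in the linear term's interval. A hypothetical helper:

```python
def comparable(constant_ci, linear_ci):
    """True when the constant CI contains 0 and the linear CI contains 1."""
    return (constant_ci[0] <= 0 <= constant_ci[1]
            and linear_ci[0] <= 1 <= linear_ci[1])

# Intervals from the results above.
print(comparable((-2.77513, 4.06395), (0.96769, 1.02315)))  # True
```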

Error Variance Ratio (New/Current): 0.9

Regression Equation

New = 0.644 + 0.995 Current

Predictor | Coef | SE Coef | Z | P | Approx 95% CI |
---|---|---|---|---|---|
Constant | 0.64441 | 1.74470 | 0.3694 | 0.712 | (-2.77513, 4.06395) |
Current | 0.99542 | 0.01415 | 70.3461 | 0.000 | (0.96769, 1.02315) |

Variable | Variance |
---|---|
New | 1.07856 |
Current | 1.19840 |

The error variances describe the amount of uncertainty about the values of the predictor and the response.

Use the error variances for each variable to understand the variation in the measurements of the response variable and the predictor variable. Larger error variances indicate that the measurements are more uncertain. The error variance for the predictor variable and the error variance ratio determine the error variance for the response variable.
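Because the ratio is defined as the response error variance divided by the predictor error variance, the response error variance is the ratio times the predictor error variance. Using the values from the output above:

```python
# Error variance ratio (New/Current) and predictor error variance
# from the output above.
delta = 0.9
var_current = 1.19840

# Response error variance = ratio * predictor error variance.
var_new = delta * var_current
# var_new is 1.07856, matching the variances table above.
```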