
Weighted least squares regression is a method for dealing with observations that have nonconstant variances. If the variances are not constant, observations with:

- large variances should be given relatively small weights
- small variances should be given relatively large weights

The usual choice of weights is the inverse of pure error variance in the response.

The formula for the estimated coefficients is:

b = (X'WX)^{−1}X'WY

This is equivalent to minimizing the weighted SS Error:

SS Error = Σ w_{i}(y_{i} − ŷ_{i})^{2}

Term | Description
---|---
X | design matrix
X' | transpose of the design matrix
W | an n × n matrix with the weights on the diagonal
Y | vector of response values
n | number of observations
w_{i} | weight for the i^{th} observation
y_{i} | response value for the i^{th} observation
ŷ_{i} | fitted value for the i^{th} observation
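As a concrete illustration, the estimator b = (X'WX)^{−1}X'WY can be sketched in NumPy. The data and variances below are invented for the example, not taken from the source:

```python
import numpy as np

# Hypothetical data: 5 observations, intercept plus one predictor.
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])
Y = np.array([2.1, 4.0, 6.2, 8.1, 9.8])

# Assumed known error variances; the weights are their inverses.
variances = np.array([0.5, 0.5, 1.0, 2.0, 2.0])
W = np.diag(1.0 / variances)

# b = (X'WX)^-1 X'WY, computed with solve() rather than an explicit inverse
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)

# The same b minimizes the weighted SS Error: sum of w_i * (y_i - yhat_i)^2
weighted_sse = np.sum((1.0 / variances) * (Y - X @ b) ** 2)
```

Observations with large variances get small weights, so they pull less on the fit than in ordinary least squares.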

The Box-Cox transformation selects the lambda value that minimizes the residual sum of squares. The resulting transformation is *Y* ^{λ} when λ ≠ 0 and ln(*Y*) when λ = 0. When λ < 0, Minitab also multiplies the transformed response by −1 to maintain the order from the untransformed response.

Minitab searches for an optimal value between −2 and 2. Values that fall outside of this interval might not result in a better fit.

Here are some common transformations where *Y*′ is the transform of the data *Y*:

Lambda (λ) value | Transformation
---|---
λ = 2 | Y′ = Y^{2}
λ = 0.5 | Y′ = √Y
λ = 0 | Y′ = ln(Y)
λ = −0.5 | Y′ = −1 / √Y
λ = −1 | Y′ = −1 / Y
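A minimal Python sketch of these transformation rules (the function name `box_cox` is ours, not Minitab's):

```python
import math

def box_cox(y, lmbda):
    """Apply Y^lambda (lambda != 0) or ln(Y) (lambda == 0) to a positive y.

    For lambda < 0, the result is multiplied by -1 so the transformed
    values keep the same order as the untransformed ones.
    """
    if lmbda == 0:
        return math.log(y)
    transformed = y ** lmbda
    return -transformed if lmbda < 0 else transformed
```

Without the sign flip, a negative λ would reverse the ordering: for example, Y = 1 and Y = 4 map to 1 and 0.5 under λ = −0.5, but to −1 and −0.5 after the flip, preserving 1 < 4.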

For a model with multiple predictors, the equation is:

*y* = *β*_{0} + *β*_{1}*x*_{1} + … + *β*_{k}*x*_{k} + *ε*

The fitted equation is:

*ŷ* = *b*_{0} + *b*_{1}*x*_{1} + … + *b*_{k}*x*_{k}

In simple linear regression, which includes only one predictor, the model is:

*y* = *β*_{0} + *β*_{1}*x*_{1} + *ε*

Using the regression estimates *b*_{0} for *β*_{0} and *b*_{1} for *β*_{1}, the fitted equation is:

*ŷ* = *b*_{0} + *b*_{1}*x*_{1}

Term | Description
---|---
y | response
x_{k} | k^{th} term. Each term can be a single predictor, a polynomial term, or an interaction term.
β_{k} | k^{th} population regression coefficient
ε | error term that follows a normal distribution with a mean of 0
b_{k} | estimate of the k^{th} population regression coefficient
ŷ | fitted response
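As a tiny worked example, with hypothetical estimates b_{0} = 1.5 and b_{1} = 0.8 (values invented for illustration), the fitted equation ŷ = b_{0} + b_{1}x becomes:

```python
# Hypothetical coefficient estimates for a simple linear regression.
b0, b1 = 1.5, 0.8

def fitted(x):
    # yhat = b0 + b1 * x: the estimates stand in for the population
    # coefficients beta_0 and beta_1.
    return b0 + b1 * x
```

So an observation with x = 2 has a fitted value of 1.5 + 0.8 × 2 = 3.1.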

The design matrix contains the predictors in a matrix (**X**) with *n* rows, where *n* is the number of observations. There is a column for each coefficient in the model.

Categorical predictors are coded using either (1, 0) or (−1, 0, +1) coding. **X** does not include a column for the reference level of the factor.

To calculate the columns for an interaction term, multiply all of the corresponding values for the predictors in the interaction. For example, suppose the first observation has a value of 4 for predictor A and a value of 2 for predictor B. In the design matrix, the interaction between A and B is represented as 8 (4 × 2).
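A sketch of building such a design matrix in NumPy, including the A×B interaction column. The data are invented, except that the first observation uses A = 4 and B = 2 as in the example above:

```python
import numpy as np

# Hypothetical observations of two continuous predictors.
A = np.array([4.0, 1.0, 3.0])
B = np.array([2.0, 5.0, 2.0])

# One row per observation; columns: intercept, A, B, and the interaction,
# formed by elementwise multiplication of the predictor columns.
X = np.column_stack([np.ones(len(A)), A, B, A * B])
```

The first row's interaction entry is 4 × 2 = 8, matching the example.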

The **X'X** inverse is a *p* × *p* matrix, where *p* is the number of coefficients in the model. Multiplying the **X'X** inverse by the MSE produces the variance-covariance matrix of the coefficients. Minitab also uses the **X'X** inverse to calculate the regression coefficients and the hat matrix.
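A small NumPy sketch (with made-up data) of how the X'X inverse yields both the coefficients and the variance-covariance matrix:

```python
import numpy as np

# Hypothetical fit: 4 observations, intercept plus one predictor.
X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
y = np.array([1.0, 2.1, 2.9, 4.2])

xtx_inv = np.linalg.inv(X.T @ X)       # the p x p (X'X)^-1 matrix
b = xtx_inv @ X.T @ y                  # coefficients: (X'X)^-1 X'y
n, p = X.shape
residuals = y - X @ b
mse = residuals @ residuals / (n - p)  # MSE = SSE / (n - p)
cov_b = mse * xtx_inv                  # variance-covariance of the coefficients
```

The diagonal of `cov_b` holds the coefficient variances, whose square roots are the standard errors of the coefficients.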

Let r_{ij} be the element in the current swept matrix associated with X_{i} and X_{j}.

Variables are entered or removed one at a time. X_{k} is eligible for entry if it is an independent variable not currently in the model with r_{kk} ≥ tolerance (with a default of 0.0001), and also, for each variable X_{j} that is currently in the model, (r_{jj} − r_{jk}(r_{kj} / r_{kk})) × tolerance ≤ 1.

To remove highly correlated predictors from a regression equation, Minitab does the following steps:

- Minitab performs the SWEEP method on the correlation matrix, R, treating X_{1}, …, X_{p} as if they are random variables.
- For any continuous predictor, Minitab compares the element r_{kk} with the tolerance: r_{kk} ≥ tolerance, where k = 1 to p.
- For each variable X_{j} currently in the model, Minitab checks that (r_{jj} − r_{jk} × (r_{kj} / r_{kk})) × tolerance ≤ 1, where r_{kk}, r_{jk}, and r_{jj} are the corresponding diagonal and off-diagonal elements for the X_{j} and X_{k} variables after k SWEEP operations.
- Otherwise, the predictor fails the test and is removed from the model.

###### Note

The default tolerance value is 8.8e–12.
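The swept diagonal element r_{kk} equals 1 − R² from regressing the candidate predictor on the variables already in the model, so the tolerance screen can be sketched without implementing SWEEP itself. The data and the helper name `tolerance` below are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)                # unrelated to x1
x3 = x1 + 1e-8 * rng.normal(size=50)    # nearly a duplicate of x1

def tolerance(candidate, in_model):
    """1 - R^2 of the candidate regressed on the in-model predictors,
    i.e. the quantity the swept diagonal element r_kk measures."""
    Z = np.column_stack([np.ones(len(candidate))] + in_model)
    coef, *_ = np.linalg.lstsq(Z, candidate, rcond=None)
    resid = candidate - Z @ coef
    total = np.sum((candidate - candidate.mean()) ** 2)
    return np.sum(resid ** 2) / total

tol_x2 = tolerance(x2, [x1])  # near 1: x2 carries independent information
tol_x3 = tolerance(x3, [x1])  # near 0: x3 is almost collinear with x1
```

A predictor whose value falls below the tolerance threshold would fail the screen and be dropped, which is exactly what happens to a near-duplicate like `x3`.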

You can use the TOLERANCE subcommand with the REGRESS session command to force Minitab to keep a predictor in the model that is highly correlated with a different predictor. However, lowering the tolerance can be dangerous, possibly producing numerically inaccurate results.