
Weighted least squares regression is a method for dealing with observations that have nonconstant variances. If the variances are not constant, observations with:

- large variances should be given relatively small weights
- small variances should be given relatively large weights

The usual choice of weights is the inverse of pure error variance in the response.

The formula for the estimated coefficients is as follows:

**b** = (**X'WX**)^{−1}**X'WY**

This is equivalent to minimizing the weighted SS Error:

Σ *w*_{i}(*y*_{i} − *ŷ*_{i})^{2}

Term | Description |
---|---|
X | design matrix |
X' | transpose of the design matrix |
W | an n x n matrix with the weights on the diagonal |
Y | vector of response values |
n | number of observations |
w_{i} | weight for the i^{th} observation |
y_{i} | response value for the i^{th} observation |
ŷ_{i} | fitted value for the i^{th} observation |
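The weighted least squares estimator **b** = (**X'WX**)^{−1}**X'WY** can be sketched with NumPy. The data and variances below are hypothetical, used only to illustrate the calculation:

```python
import numpy as np

# Hypothetical data: 5 observations, intercept plus one predictor.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
Y = np.array([2.1, 4.2, 5.9, 8.4, 9.8])

# Weights: inverse of the (assumed known) error variance per observation,
# so observations with large variances get small weights.
variances = np.array([0.5, 0.5, 1.0, 2.0, 2.0])
W = np.diag(1.0 / variances)

# b = (X'WX)^-1 X'WY, solved as a linear system for numerical stability
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)

# Minimized criterion: the weighted SS Error
fitted = X @ b
weighted_sse = np.sum((1.0 / variances) * (Y - fitted) ** 2)
print(b)
```

The same `b` would come from minimizing the weighted SS Error directly; computing it through the closed form above is the standard route.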

Box-Cox transformation selects lambda values, as shown below, which minimize the residual sum of squares. The resulting transformation is *Y* ^{λ} when λ ≠ 0 and ln(*Y*) when λ = 0. When λ < 0, Minitab also multiplies the transformed response by −1 to maintain the order from the untransformed response.

Minitab searches for an optimal value between −2 and 2. Values that fall outside of this interval might not result in a better fit.

Here are some common transformations where *Y*′ is the transform of the data *Y*:

Lambda (λ) value | Transformation |
---|---|
λ = 2 | Y′ = Y^{2} |
λ = 0.5 | Y′ = √Y |
λ = 0 | Y′ = ln(Y) |
λ = −0.5 | Y′ = −1 / √Y |
λ = −1 | Y′ = −1 / Y |
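The transformation rule, including the sign flip for negative λ that preserves the ordering of the response, can be sketched as a small function. The function name and sample values are hypothetical:

```python
import numpy as np

def boxcox_transform(y, lam):
    """Box-Cox transform with the sign convention described above:
    Y^lambda for lambda != 0, ln(Y) for lambda == 0,
    multiplied by -1 when lambda < 0 to keep the original order."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    t = y ** lam
    return -t if lam < 0 else t

y = np.array([1.0, 4.0, 9.0])
print(boxcox_transform(y, 0.5))   # square root: [1. 2. 3.]
print(boxcox_transform(y, -1.0))  # -1/Y, still increasing in y
print(boxcox_transform(y, 0))     # ln(Y)
```

Without the −1 multiplier, λ = −1 would map an increasing response to a decreasing one; the sign flip keeps larger responses mapping to larger transformed values.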

For a model with multiple predictors, the equation is:

*y* = *β*_{0} + *β*_{1}*x*_{1} + … + *β*_{k}*x*_{k} + *ε*

The fitted equation is:

*ŷ* = *b*_{0} + *b*_{1}*x*_{1} + … + *b*_{k}*x*_{k}

In simple linear regression, which includes only one predictor, the model is:

*y* = *β*_{0} + *β*_{1}*x*_{1} + *ε*

Using regression estimates *b*_{0} for *β*_{0} and *b*_{1} for *β*_{1}, the fitted equation is:

*ŷ* = *b*_{0} + *b*_{1}*x*_{1}

Term | Description |
---|---|
y | response |
x_{k} | k^{th} term. Each term can be a single predictor, a polynomial term, or an interaction term. |
β_{k} | k^{th} population regression coefficient |
ε | error term that follows a normal distribution with a mean of 0 |
b_{k} | estimate of the k^{th} population regression coefficient |
ŷ | fitted response |
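The estimates *b*_{k} and the fitted values *ŷ* can be computed with ordinary least squares, *b* = (**X'X**)^{−1}**X'y**. A minimal NumPy sketch with hypothetical data for a two-predictor model:

```python
import numpy as np

# Hypothetical design matrix: intercept plus two predictors (k = 2).
X = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 3.0, 4.0],
              [1.0, 4.0, 3.0],
              [1.0, 5.0, 6.0]])
y = np.array([7.0, 6.1, 14.9, 14.2, 22.1])

# b = (X'X)^-1 X'y, the least squares estimates of the beta coefficients
b = np.linalg.solve(X.T @ X, X.T @ y)

# Fitted values: y_hat = b0 + b1*x1 + b2*x2 for every observation
y_hat = X @ b
residuals = y - y_hat
print(b)
```

The residuals `y - y_hat` estimate the error term ε; under the model they should scatter around 0.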

The design matrix contains the predictors in a matrix (**X**) with *n* rows, where *n* is the number of observations. There is a column for each coefficient in the model.

Categorical predictors are coded using either 1, 0 or -1, 0, 1 coding. **X** does not include a column for the reference level of the factor.

To calculate the columns for an interaction term, multiply all of the corresponding values for the predictors in the interaction. For example, suppose the first observation has a value of 4 for predictor A and a value of 2 for predictor B. In the design matrix, the interaction between A and B is represented as 8 (4 x 2).
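Building such a design matrix can be sketched in NumPy. The predictor values are hypothetical, with the first observation matching the A = 4, B = 2 example above:

```python
import numpy as np

# Hypothetical continuous predictors A and B for three observations.
A = np.array([4.0, 1.0, 3.0])
B = np.array([2.0, 5.0, 2.0])

# Interaction column: elementwise product of the predictor columns.
AB = A * B

# Design matrix X: intercept, A, B, and the A*B interaction.
X = np.column_stack([np.ones_like(A), A, B, AB])
print(X[0])  # [1. 4. 2. 8.] -- the interaction entry for A = 4, B = 2 is 8

# A hypothetical 3-level categorical predictor with level 3 as the
# reference level: 1, 0 coding, with no column for the reference level.
factor = np.array([1, 2, 3])
d1 = (factor == 1).astype(float)
d2 = (factor == 2).astype(float)
```

Note that the coded columns `d1` and `d2` identify levels 1 and 2; the reference level is the row where both are 0.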

The **X'X** inverse is a *p* x *p* matrix, where *p* is the number of coefficients in the model. Multiplying the **X'X** inverse by the MSE produces the variance-covariance matrix of the coefficients. Minitab also uses the **X'X** inverse to calculate the regression coefficients and the hat matrix.
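These three uses of the **X'X** inverse can be sketched together. The design matrix and response below are hypothetical:

```python
import numpy as np

# Hypothetical design matrix (intercept + one predictor) and response.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([1.9, 4.1, 6.0, 8.1])

xtx_inv = np.linalg.inv(X.T @ X)          # (X'X)^-1, a p x p matrix

b = xtx_inv @ X.T @ y                     # regression coefficients
H = X @ xtx_inv @ X.T                     # hat matrix: H @ y gives fitted values

n, p = X.shape
mse = np.sum((y - X @ b) ** 2) / (n - p)  # mean squared error
cov_b = mse * xtx_inv                     # variance-covariance of coefficients
print(cov_b)
```

The diagonal of `cov_b` holds the coefficient variances; their square roots are the standard errors of the coefficients.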