Select the method or formula of your choice.

The expectation function for observation n is denoted by:

f(x_{n}, **θ**)

Minitab considers the expectation function for all N observations as a vector-valued function of the parameters:

**η**(**θ**) = (f(x_{1}, **θ**), f(x_{2}, **θ**), ..., f(x_{N}, **θ**))′

which is an N x 1 vector with elements η_{n}(**θ**) = f(x_{n}, **θ**).

The Jacobian of **η** is an N x P matrix with elements equal to the partial derivatives of the expectation function with respect to the parameters:

**V** = {∂η_{n}(**θ**) / ∂θ_{p}}

Let **V**^{i} = **V**(**θ**^{i}) be the Jacobian evaluated at **θ**^{i}, the parameter estimate after iteration i. Then a linear approximation for **η** is:

**η**(**θ**) ≈ **η**(**θ**^{i}) + **V**^{i}(**θ** − **θ**^{i})

Let **θ*** denote the least-squares estimate.
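The expectation vector and its Jacobian can be sketched numerically. This is an illustration only, for an assumed model f(x, θ) = θ₁·exp(θ₂·x); the function names and the finite-difference approximation are not part of Minitab's implementation.

```python
import numpy as np

# Hypothetical expectation function: f(x, theta) = theta1 * exp(theta2 * x).
# Any smooth model with P parameters works the same way.
def f(x, theta):
    return theta[0] * np.exp(theta[1] * x)

def eta(theta, x):
    # N x 1 expectation vector with elements eta_n(theta) = f(x_n, theta)
    return f(x, theta)

def jacobian(theta, x, h=1e-6):
    # N x P matrix of partial derivatives d eta_n / d theta_p,
    # approximated here by forward finite differences.
    N, P = len(x), len(theta)
    V = np.empty((N, P))
    base = eta(theta, x)
    for p in range(P):
        step = np.zeros(P)
        step[p] = h
        V[:, p] = (eta(theta + step, x) - base) / h
    return V

x = np.array([0.0, 1.0, 2.0, 3.0])
theta = np.array([2.0, 0.5])
V = jacobian(theta, x)   # 4 x 2 Jacobian for this toy model
```

For this model the analytic partials are ∂f/∂θ₁ = exp(θ₂x) and ∂f/∂θ₂ = θ₁x·exp(θ₂x), so the finite-difference columns can be checked directly.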

By default, Minitab uses the Gauss-Newton method to determine the least squares estimates. The method uses a linear approximation to the expectation function to iteratively improve an initial guess **θ**^{0} for **θ**, and then keeps improving the estimates until the relative offset falls below the prescribed tolerance^{1}. That is, Minitab expands the expectation function f(x_{n}, **θ**) in a first order Taylor series about **θ**^{0} as:

f(x_{n}, **θ**) ≈ f(x_{n}, **θ**^{0}) + v_{n1}^{0}(θ_{1} − θ_{1}^{0}) + v_{n2}^{0}(θ_{2} − θ_{2}^{0}) + ... + v_{nP}^{0}(θ_{P} − θ_{P}^{0})

where

v_{np}^{0} = ∂f(x_{n}, **θ**) / ∂θ_{p} evaluated at **θ**^{0}, with *p* = 1, 2, ..., *P*

Including all N cases:

**η**(**θ**) ≈ **η**(**θ**^{0}) + **V**^{0}**δ**

where **δ** = **θ** − **θ**^{0} and **V**^{0} is the N x P derivative matrix with elements {v_{np}^{0}}. This is equivalent to approximating the residuals, **z**(**θ**) = **y** − **η**(**θ**), by:

**z**(**θ**) ≈ **z**^{0} − **V**^{0}**δ**

where **z**^{0} = **y** − **η**(**θ**^{0}).

Minitab calculates the Gauss increment **δ**^{0} to minimize the approximate residual sum of squares ‖**z**^{0} − **V**^{0}**δ**‖^{2}, using:

**V**^{0}**δ**^{0} = **z**^{0}

The point

**η**(**θ**^{1}) = **η**(**θ**^{0} + **δ**^{0})

should now be closer to **y** than **η**(**θ**^{0}), and Minitab uses the value **θ**^{1} = **θ**^{0} + **δ**^{0} to perform another iteration by calculating new residuals **z**^{1} = **y** - **η**(**θ**^{1}), a new derivative matrix **V**^{1}, and a new increment. Minitab repeats this process until convergence, which is when the increment is so small that there is no useful change in the elements of the parameter vector.
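The iteration above can be sketched in a few lines. This is a minimal illustration, not Minitab's implementation: the model, starting values, and convergence test (on the size of the increment rather than the relative offset) are assumptions for the sketch.

```python
import numpy as np

def gauss_newton(f, jac, x, y, theta0, tol=1e-8, max_iter=50):
    # Sketch of the Gauss-Newton iteration: at each step solve the
    # linear least squares problem V delta = z for the increment.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        z = y - f(x, theta)                            # residuals z = y - eta(theta)
        V = jac(x, theta)                              # N x P derivative matrix
        delta, *_ = np.linalg.lstsq(V, z, rcond=None)  # Gauss increment
        theta = theta + delta
        if np.linalg.norm(delta) < tol * (np.linalg.norm(theta) + tol):
            break   # increment too small to usefully change the parameters
    return theta

# Assumed toy model f(x, theta) = theta1 * exp(theta2 * x), with analytic partials.
def f(x, theta):
    return theta[0] * np.exp(theta[1] * x)

def jac(x, theta):
    return np.column_stack([np.exp(theta[1] * x),
                            theta[0] * x * np.exp(theta[1] * x)])

x = np.linspace(0.0, 2.0, 20)
y = f(x, np.array([2.0, 0.5]))    # noise-free data generated from the true parameters
theta_hat = gauss_newton(f, jac, x, y, np.array([1.8, 0.45]))  # start near the solution
```

With noise-free data and a nearby start, the iteration recovers the generating parameters; real problems add noise and may need the step factor and compromises discussed below.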

Sometimes the Gauss-Newton increment produces an increase in the sum of squares. Even when this occurs, the linear approximation is still a close approximation to the actual surface in a sufficiently small region around **η**(**θ**^{0}). To reduce the sum of squares, Minitab introduces a step factor λ, and calculates:

**θ**^{1} = **θ**^{0} + λ**δ**^{0}

Minitab starts with λ = 1 and divides it in half until S(**θ**^{1}) < S(**θ**^{0}).
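The step-halving rule is simple enough to sketch directly. The function names are illustrative; `S`, `theta0`, and `delta0` stand for the sum-of-squares function, current estimate, and Gauss increment from the text.

```python
import numpy as np

def damped_step(S, theta0, delta0, max_halvings=30):
    # Start with lambda = 1 and divide it in half until the trial point
    # actually reduces the residual sum of squares S(theta).
    lam = 1.0
    s0 = S(theta0)
    while lam > 2.0 ** -max_halvings:
        theta1 = theta0 + lam * delta0
        if S(theta1) < s0:
            return theta1, lam    # accept the first improving step
        lam /= 2.0
    return theta0, 0.0            # no improving step found

# Toy quadratic S with minimum at theta = (1, 1); the raw increment (3, 3)
# overshoots the minimum, so halving is required.
S = lambda th: float(np.sum((th - 1.0) ** 2))
theta1, lam = damped_step(S, np.array([0.0, 0.0]), np.array([3.0, 3.0]))
```

Here the full step lands at (3, 3) and increases S, while the halved step (1.5, 1.5) reduces it and is accepted.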


When the columns of the gradient matrix **V** are collinear, **V**^{T}**V** can become singular, causing erratic behavior of the Gauss-Newton iterations. To cope with the singularity, Minitab can modify the Gauss-Newton increment to the Levenberg compromise:

**δ**(*k*) = (**V**^{T}**V** + *k***I**)^{−1}**V**^{T}**z**

or the Marquardt compromise:

**δ**(*k*) = (**V**^{T}**V** + *k***D**)^{−1}**V**^{T}**z**

where *k* is a conditioning factor and **D** is a diagonal matrix with entries equal to the diagonal elements of **V**^{T}**V**. The direction of **δ**(*k*) is intermediate between the direction of the Gauss-Newton increment (*k* → 0) and the direction of steepest descent (*k* → ∞):

**V**^{T}**z**.^{1}
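Both compromises can be written down directly from the formulas above. The matrices here are an arbitrary small random example, used only to show that small *k* recovers the Gauss-Newton increment while large *k* turns the increment toward the steepest-descent direction **V**^{T}**z**.

```python
import numpy as np

def levenberg(V, z, k):
    # delta(k) = (V'V + k I)^(-1) V'z
    P = V.shape[1]
    return np.linalg.solve(V.T @ V + k * np.eye(P), V.T @ z)

def marquardt(V, z, k):
    # delta(k) = (V'V + k D)^(-1) V'z, D = diagonal of V'V
    D = np.diag(np.diag(V.T @ V))
    return np.linalg.solve(V.T @ V + k * D, V.T @ z)

rng = np.random.default_rng(0)
V = rng.standard_normal((10, 2))
z = rng.standard_normal(10)

# k -> 0 recovers the Gauss-Newton increment ...
gn = np.linalg.solve(V.T @ V, V.T @ z)
near_gn = levenberg(V, z, 1e-10)
# ... while for large k, delta(k) is approximately V'z / k,
# i.e. a short step in the steepest-descent direction V'z.
steep = levenberg(V, z, 1e6)
```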


By default, Minitab declares convergence when the relative offset is less than 1.0e-5. This ensures that inferences are not affected materially, because the current parameter vector is then less than 0.001% of the radius of the confidence region disk from the least squares point.^{1}

1. Bates and Watts (1988). Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, Inc.