Methods in Nonlinear Regression

Notation

The expectation function for observation n is denoted by:

$$\eta_n(\theta) = f(x_n, \theta)$$

Minitab considers the expectation function for all N observations as a vector-valued function of the parameters:

$$\eta(\theta) = \left[ f(x_1, \theta),\; f(x_2, \theta),\; \ldots,\; f(x_N, \theta) \right]^T$$

which is an N × 1 vector with elements f(xn, θ), n = 1, ..., N.

The Jacobian of η is an N × P matrix with elements that are equal to the partial derivatives of the expectation function with respect to the parameters:

$$V = \{ v_{np} \}, \qquad v_{np} = \frac{\partial f(x_n, \theta)}{\partial \theta_p}, \qquad n = 1, \ldots, N; \quad p = 1, \ldots, P$$

Let Vi = V(θi) be the Jacobian evaluated at θi, the parameter estimate after iteration i.

Then a linear approximation for η is:

$$\eta(\theta) \approx \eta(\theta^i) + V^i \left( \theta - \theta^i \right)$$
which forms the basis for the Gauss-Newton method and for approximate inferences.

Let θ* denote the least-squares estimate.
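As an illustration of this notation, the following sketch builds the N × 1 expectation vector η(θ) and the N × P Jacobian V. The model f(x, θ) = θ1(1 − e^(−θ2·x)) and all names here are hypothetical examples, not taken from Minitab:

```python
import numpy as np

# Hypothetical expectation function f(x_n, theta): a two-parameter
# exponential-rise model with theta = (theta_1, theta_2).
def f(x, theta):
    return theta[0] * (1.0 - np.exp(-theta[1] * x))

def eta(x, theta):
    # N x 1 expectation vector with elements f(x_n, theta)
    return f(x, theta)

def jacobian(x, theta):
    # N x P matrix of partial derivatives d f(x_n, theta) / d theta_p
    d1 = 1.0 - np.exp(-theta[1] * x)           # d f / d theta_1
    d2 = theta[0] * x * np.exp(-theta[1] * x)  # d f / d theta_2
    return np.column_stack([d1, d2])

x = np.array([1.0, 2.0, 4.0, 8.0])
theta0 = np.array([2.0, 0.5])
print(eta(x, theta0))       # eta(theta0): shape (4,)
print(jacobian(x, theta0))  # V(theta0):  shape (4, 2)
```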

Gauss-Newton

By default, Minitab uses the Gauss-Newton method to determine the least squares estimates. The method uses a linear approximation to the expectation function to iteratively improve an initial guess θ0 for θ, and then keeps improving the estimates until the relative offset falls below the prescribed tolerance.1 That is, Minitab expands the expectation function f(xn, θ) in a first-order Taylor series about θ0 as:

$$f(x_n, \theta) \approx f(x_n, \theta^0) + \sum_{p=1}^{P} v_{np}^0 \left( \theta_p - \theta_p^0 \right)$$

where

$$v_{np}^0 = \left. \frac{\partial f(x_n, \theta)}{\partial \theta_p} \right|_{\theta = \theta^0}, \qquad p = 1, 2, \ldots, P$$

Including all N cases:

$$\eta(\theta) \approx \eta(\theta^0) + V^0 \delta$$

where V0 is the N × P derivative matrix with elements {vnp}. This is equivalent to approximating the residuals, z(θ) = y − η(θ), by:

$$z(\theta) \approx z^0 - V^0 \delta$$

where

$$z^0 = y - \eta(\theta^0)$$

and

$$\delta = \theta - \theta^0$$

Minitab calculates the Gauss increment δ0 to minimize the approximate residual sum of squares $\| z^0 - V^0 \delta \|^2$, using:

$$\left( V^{0T} V^0 \right) \delta^0 = V^{0T} z^0$$

and so:

$$\delta^0 = \left( V^{0T} V^0 \right)^{-1} V^{0T} z^0$$

The point

$$\eta^1 = \eta\left( \theta^0 + \delta^0 \right)$$

should now be closer to y than η(θ0), and Minitab uses the value θ1 = θ0 + δ0 to perform another iteration by calculating new residuals z1 = y − η(θ1), a new derivative matrix V1, and a new increment. Minitab repeats this process until convergence, which occurs when the increment is so small that there is no useful change in the elements of the parameter vector.
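A minimal sketch of this iteration, continuing the hypothetical model above; for simplicity it stops on the increment size rather than on Minitab's relative offset criterion:

```python
import numpy as np

def gauss_newton(y, x, theta0, eta, jacobian, tol=1e-5, max_iter=50):
    """Plain Gauss-Newton iteration: compute the increment from the
    linearized residuals z ~ z0 - V0 * delta, then update theta."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        z = y - eta(x, theta)   # current residuals z = y - eta(theta)
        V = jacobian(x, theta)  # N x P derivative matrix
        # Gauss increment: least-squares solution of V @ delta = z,
        # i.e. delta = (V^T V)^{-1} V^T z, computed stably via lstsq.
        delta, *_ = np.linalg.lstsq(V, z, rcond=None)
        theta = theta + delta
        # Simplified stopping rule on the increment size (Minitab's
        # actual criterion is the relative offset described below).
        if np.linalg.norm(delta) < tol * (np.linalg.norm(theta) + tol):
            break
    return theta
```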

Sometimes the Gauss-Newton increment produces an increase in the sum of squares. When this occurs, the linear approximation is still a close approximation to the actual surface for a sufficiently small region around η(θ0). To reduce the sum of squares, Minitab introduces a step factor λ, and calculates:

$$\theta^1 = \theta^0 + \lambda \delta^0$$

Minitab starts with λ = 1 and divides it in half until S(θ1) < S(θ0).
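The step-halving safeguard can be sketched as follows (illustrative only; the helper name `halved_step` is hypothetical):

```python
import numpy as np

def halved_step(y, x, theta, delta, eta, max_halvings=30):
    """Shrink the step factor lambda by halves until the new sum of
    squares S(theta + lambda*delta) falls below S(theta)."""
    def S(t):                        # residual sum of squares S(theta)
        r = y - eta(x, t)
        return r @ r
    s_old, lam = S(theta), 1.0
    for _ in range(max_halvings):
        trial = theta + lam * delta
        if S(trial) < s_old:         # accept the first improving step
            return trial
        lam /= 2.0                   # otherwise divide lambda in half
    return theta                     # no improving step found
```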
  1. Bates and Watts (1988). Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, Inc.

Levenberg-Marquardt

When the columns of the gradient matrix V are collinear, VTV can become singular or nearly singular, causing erratic behavior of the Gauss-Newton iterations. To cope with the singularity, Minitab can modify the Gauss-Newton increment to the Levenberg compromise:

$$\delta(k) = \left( V^T V + k I \right)^{-1} V^T z$$

or the Marquardt compromise:

$$\delta(k) = \left( V^T V + k D \right)^{-1} V^T z$$

where k is a conditioning factor and D is a diagonal matrix with entries that are equal to the diagonal elements of VTV. The direction of δ(k) is intermediate between the direction of the Gauss-Newton increment (k → 0) and the direction of steepest descent, VTz (k → ∞).1
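Both compromises can be sketched in a few lines; `lm_increment` is a hypothetical name, and the Marquardt form reduces to the Levenberg form when D is replaced by the identity:

```python
import numpy as np

def lm_increment(V, z, k, marquardt=True):
    """Compromise increment delta(k).
    Levenberg:  delta(k) = (V^T V + k I)^{-1} V^T z
    Marquardt:  delta(k) = (V^T V + k D)^{-1} V^T z,
    where D holds the diagonal elements of V^T V."""
    A = V.T @ V
    g = V.T @ z                 # steepest-descent direction V^T z
    damp = np.diag(np.diag(A)) if marquardt else np.eye(A.shape[0])
    return np.linalg.solve(A + k * damp, g)
```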

  1. Bates and Watts (1988). Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, Inc.

Relative offset convergence criterion

By default, Minitab declares convergence when the relative offset is less than 1.0e-5. This ensures that any inferences are not materially affected by the fact that the current parameter vector is less than 0.001% of the radius of the confidence region disk away from the least squares point.1
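One common form of the Bates-Watts relative offset compares the component of the residual vector lying in the tangent plane of the expectation surface to the scale of the confidence disk. The sketch below computes it from a QR decomposition of V, assuming the form ρ = ‖Q1ᵀz‖ / (√P · s), which may differ in detail from Minitab's exact formula:

```python
import numpy as np

def relative_offset(V, z):
    """Relative offset (sketch): tangential residual component
    relative to the confidence-disk scale, via a QR decomposition.
    Assumes rho = ||Q1^T z|| / (sqrt(P) * s),
    with s^2 = ||Q2^T z||^2 / (N - P)."""
    N, P = V.shape
    Q, _ = np.linalg.qr(V, mode="complete")  # Q is N x N orthogonal
    Q1, Q2 = Q[:, :P], Q[:, P:]
    tangential = np.linalg.norm(Q1.T @ z)    # component in tangent plane
    s = np.linalg.norm(Q2.T @ z) / np.sqrt(N - P)
    return tangential / (np.sqrt(P) * s)

# Convergence would be declared when relative_offset(V, z) < 1.0e-5.
```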

  1. Bates and Watts (1988). Nonlinear Regression Analysis and Its Applications. John Wiley & Sons, Inc.