lsqnonlin

Solve nonlinear least-squares (nonlinear data-fitting) problems


Syntax

x = lsqnonlin(fun,x0)

x = lsqnonlin(fun,x0,lb,ub)

x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq)

x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq,nonlcon)

x = lsqnonlin(fun,x0,lb,ub,options)

x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq,nonlcon,options)

x = lsqnonlin(problem)

[x,resnorm] = lsqnonlin(___)

[x,resnorm,residual,exitflag,output] = lsqnonlin(___)

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(___)

Description

Nonlinear least-squares solver

Solves nonlinear least-squares curve fitting problems of the form

$$\min_x \|f(x)\|_2^2 = \min_x \left( f_1(x)^2 + f_2(x)^2 + \dots + f_n(x)^2 \right)$$

subject to the constraints

$$lb \le x \le ub, \quad A\,x \le b, \quad Aeq\,x = beq, \quad c(x) \le 0, \quad ceq(x) = 0.$$

x, lb, and ub can be vectors or matrices; see Matrix Arguments.

Do not specify the objective function as the scalar value $\|f(x)\|_2^2$ (the sum of squares). lsqnonlin requires the objective function to be the vector-valued function

$$f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ \vdots \\ f_n(x) \end{bmatrix}.$$
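
For instance, a minimal sketch (the data t, y and the decay model here are illustrative, not from this page): pass the vector of residuals, not its sum of squares.

t = linspace(0,3)';                     % illustrative data
y = exp(-1.3*t);
fun = @(r) exp(-r*t) - y;               % correct: returns a vector of residuals
% bad = @(r) sum((exp(-r*t) - y).^2);   % wrong for lsqnonlin: returns the scalar sum of squares
r = lsqnonlin(fun,1)                    % returns r close to 1.3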


x = lsqnonlin(fun,x0) starts at the point x0 and finds a minimum of the sum of squares of the functions described in fun. The function fun should return a vector (or array) of values and not the sum of squares of the values. (The algorithm implicitly computes the sum of squares of the components of fun(x).)

Note

Passing Extra Parameters explains how to pass extra parameters to the vector function fun(x), if necessary.


x = lsqnonlin(fun,x0,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. You can fix the solution component x(i) by specifying lb(i) = ub(i).

Note

If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].

Components of x0 that violate the bounds lb≤x≤ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.
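
For example, a minimal sketch that pins the second component at 1 while leaving the first free (the residual model here is illustrative):

fun = @(x) [x(1) + 2*x(2) - 2; x(1) - x(2)];  % illustrative residual vector
lb = [-Inf 1];
ub = [ Inf 1];                                % lb(2) = ub(2) fixes x(2) = 1
x = lsqnonlin(fun,[0 1],lb,ub)                % x(2) is 1 at the solution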


x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq) constrains the solution to satisfy the linear constraints

A x ≤ b

Aeq x = beq.


x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq,nonlcon) constrains the solution to satisfy the nonlinear constraints in the nonlcon(x) function. nonlcon returns two outputs, c and ceq. The solver attempts to satisfy the constraints

c ≤ 0

ceq = 0.


x = lsqnonlin(fun,x0,lb,ub,options) and x = lsqnonlin(fun,x0,lb,ub,A,b,Aeq,beq,nonlcon,options) minimize with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub and for other input arguments if the arguments do not exist.

x = lsqnonlin(problem) finds the minimum for problem, a structure described in problem.


[x,resnorm] = lsqnonlin(___), for any input arguments, returns the value of the squared 2-norm of the residual at x: sum(fun(x).^2).


[x,resnorm,residual,exitflag,output] = lsqnonlin(___) additionally returns the value of the residual fun(x) at the solution x, a value exitflag that describes the exit condition, and a structure output that contains information about the optimization process.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(___) additionally returns a structure lambda whose fields contain the Lagrange multipliers at the solution x, and the Jacobian of fun at the solution x.

Examples


Fit a Simple Exponential


Fit a simple exponential decay curve to data.

Generate data from an exponential decay model plus noise. The model is

y = exp(-1.3t) + ε,

with t ranging from 0 through 3, and ε normally distributed noise with mean 0 and standard deviation 0.05.

rng default % for reproducibility
d = linspace(0,3);
y = exp(-1.3*d) + 0.05*randn(size(d));

The problem is: given the data (d, y), find the exponential decay rate that best fits the data.

Create an anonymous function that takes a value of the exponential decay rate r and returns a vector of differences from the model with that decay rate and the data.

fun = @(r)exp(-d*r)-y;

Find the value of the optimal decay rate. Arbitrarily choose an initial guess x0 = 4.

x0 = 4;
x = lsqnonlin(fun,x0)

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.

x = 1.2645

Plot the data and the best-fitting exponential curve.

plot(d,y,'ko',d,exp(-x*d),'b-')
legend('Data','Best fit')
xlabel('t')
ylabel('exp(-tx)')

[Figure: data points and the best-fit exponential decay curve]

Fit a Problem with Bound Constraints


Find the best-fitting model when some of the fitting parameters have bounds.

Find a centering b and scaling a that best fit the function

$$a\, e^{-t} \exp\left(-e^{-(t-b)}\right)$$

to the standard normal density,

$$\frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}.$$

Create a vector t of data points, and the corresponding normal density at those points.

t = linspace(-4,4);
y = 1/sqrt(2*pi)*exp(-t.^2/2);

Create a function that evaluates the difference between the centered and scaled function and the normal density y, with x(1) as the scaling a and x(2) as the centering b.

fun = @(x)x(1)*exp(-t).*exp(-exp(-(t-x(2)))) - y;

Find the optimal fit starting from x0 = [1/2,0], with the scaling a between 1/2 and 3/2, and the centering b between -1 and 3.

lb = [1/2,-1];
ub = [3/2,3];
x0 = [1/2,0];
x = lsqnonlin(fun,x0,lb,ub)

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.

Plot the two functions to see the quality of the fit.

plot(t,y,'r-',t,fun(x)+y,'b-')
xlabel('t')
legend('Normal density','Fitted function')

[Figure: normal density and the fitted function]

Least Squares with Linear Constraint


Consider the following objective function, a sum of squares:

$$\sum_{k=1}^{10} \left( 2 + 2k - e^{k x_1} - 2 e^{2 k x_2^2} \right)^2.$$

The code for this objective function appears as the myfun function at the end of this example.

Minimize this function subject to the linear constraint $x_1 \le x_2/2$. Write this constraint as $x_1 - x_2/2 \le 0$.

A = [1 -1/2];
b = 0;

Impose the bounds $x_1 \ge 0$, $x_2 \ge 0$, $x_1 \le 2$, and $x_2 \le 4$.

lb = [0 0];
ub = [2 4];

Start the optimization process from the point x0 = [0.3 0.4].

x0 = [0.3 0.4];

The problem has no linear equality constraints.

Aeq = [];
beq = [];

Run the optimization.

x = lsqnonlin(@myfun,x0,lb,ub,A,b,Aeq,beq)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.

x = 1×2

    0.1695    0.3389
function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - 2*exp(2*k*(x(2)^2));
end

Nonlinear Least Squares with Nonlinear Constraint


Consider the following objective function, a sum of squares:

$$\sum_{k=1}^{10} \left( 2 + 2k - e^{k x_1} - 2 e^{2 k x_2^2} \right)^2.$$

The code for this objective function appears as the myfun function at the end of this example.

Minimize this function subject to the nonlinear constraint $\sin(x_1) \le \cos(x_2)$. The code for this nonlinear constraint function appears as the nlcon function at the end of this example.

Impose the bounds $x_1 \ge 0$, $x_2 \ge 0$, $x_1 \le 2$, and $x_2 \le 4$.

lb = [0 0];
ub = [2 4];

Start the optimization process from the point x0 = [0.3 0.4].

x0 = [0.3 0.4];

The problem has no linear constraints.

A = [];
b = [];
Aeq = [];
beq = [];

Run the optimization.

x = lsqnonlin(@myfun,x0,lb,ub,A,b,Aeq,beq,@nlcon)
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than the value of the step size tolerance and constraints are satisfied to within the value of the constraint tolerance.

x = 1×2

    0.2133    0.3266
function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - 2*exp(2*k*(x(2)^2));
end

function [c,ceq] = nlcon(x)
ceq = [];
c = sin(x(1)) - cos(x(2));
end

Nonlinear Least Squares with Nondefault Options


Compare the results of a data-fitting problem when using different lsqnonlin algorithms.

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters x(1) and x(2) to fit a model of the form

ydata = x(1)*exp(x(2)*xdata).

Input the observation times and responses.

xdata = ...
 [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
 [455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model. The model computes a vector of differences between predicted values and observed values.

fun = @(x)x(1)*exp(x(2)*xdata)-ydata;

Fit the model using the starting point x0 = [100,-1]. First, use the default 'trust-region-reflective' algorithm.

x0 = [100,-1];
options = optimoptions(@lsqnonlin,'Algorithm','trust-region-reflective');
x = lsqnonlin(fun,x0,[],[],options)

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.

x = 1×2

  498.8309   -0.1013

See if there is any difference using the 'levenberg-marquardt' algorithm.

options.Algorithm = 'levenberg-marquardt';
x = lsqnonlin(fun,x0,[],[],options)

Local minimum possible.
lsqnonlin stopped because the relative size of the current step is less than the value of the step size tolerance.

x = 1×2

  498.8309   -0.1013

The two algorithms found the same solution. Plot the solution and the data.

plot(xdata,ydata,'ko')
hold on
tlist = linspace(xdata(1),xdata(end));
plot(tlist,x(1)*exp(x(2)*tlist),'b-')
xlabel xdata
ylabel ydata
title('Exponential Fit to Data')
legend('Data','Exponential Fit')
hold off

[Figure: exponential fit to the data]

Nonlinear Least Squares Solution and Residual Norm


Find the x that minimizes

$$\sum_{k=1}^{10} \left( 2 + 2k - e^{k x_1} - e^{k x_2} \right)^2,$$

and find the value of the minimal sum of squares.

Because lsqnonlin assumes that the sum of squares is not explicitly formed in the user-defined function, the function passed to lsqnonlin should instead compute the vector-valued function

$$F_k(x) = 2 + 2k - e^{k x_1} - e^{k x_2},$$

for k=1 to 10 (that is, F should have 10 components).

The myfun function, which computes the 10-component vector F, appears at the end of this example.

Find the minimizing point and the minimum value, starting at the point x0 = [0.3,0.4].

x0 = [0.3,0.4];
[x,resnorm] = lsqnonlin(@myfun,x0)

Local minimum possible.
lsqnonlin stopped because the size of the current step is less than the value of the step size tolerance.

x = 1×2

    0.2578    0.2578

resnorm = 124.3622

The resnorm output is the squared residual norm, or the sum of squares of the function values.

The following function computes the vector-valued objective function.

function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
end

Examine the Solution Process


Examine the solution process both as it occurs (by setting the Display option to 'iter') and afterward (by examining the output structure).

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters x(1) and x(2) to fit a model of the form

ydata = x(1)*exp(x(2)*xdata).

Input the observation times and responses.

xdata = ...
 [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
 [455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model. The model computes a vector of differences between predicted values and observed values.

fun = @(x)x(1)*exp(x(2)*xdata)-ydata;

Fit the model using the starting point x0 = [100,-1]. Examine the solution process by setting the Display option to 'iter'. Return an output structure to get more information about the solution process.

x0 = [100,-1];
options = optimoptions('lsqnonlin','Display','iter');
[x,resnorm,residual,exitflag,output] = lsqnonlin(fun,x0,[],[],options);

                                         Norm of      First-order
 Iteration  Func-count     Resnorm        step         optimality
     0           3         359677                      2.88e+04
Objective function returned Inf; trying a new point...
     1           6         359677       11.6976        2.88e+04
     2           9         321395       0.5            4.97e+04
     3          12         321395       1              4.97e+04
     4          15         292253       0.25           7.06e+04
     5          18         292253       0.5            7.06e+04
     6          21         270350       0.125          1.15e+05
     7          24         270350       0.25           1.15e+05
     8          27         252777       0.0625         1.63e+05
     9          30         252777       0.125          1.63e+05
    10          33         243877       0.03125        7.48e+04
    11          36         243660       0.0625         8.7e+04
    12          39         243276       0.0625         2e+04
    13          42         243174       0.0625         1.14e+04
    14          45         242999       0.125          5.1e+03
    15          48         242661       0.25           2.04e+03
    16          51         241987       0.5            1.91e+03
    17          54         240643       1              1.04e+03
    18          57         237971       2              3.36e+03
    19          60         232686       4              6.04e+03
    20          63         222354       8              1.2e+04
    21          66         202592       16             2.25e+04
    22          69         166443       32             4.05e+04
    23          72         106320       64             6.68e+04
    24          75         28704.7      128            8.31e+04
    25          78         89.7947      140.674        2.22e+04
    26          81         9.57381      2.02599        684
    27          84         9.50489      0.0619927      2.27
    28          87         9.50489      0.000462261    0.0114

Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance.

Examine the output structure to obtain more information about the solution process.

output
output = struct with fields:
      firstorderopt: 0.0114
         iterations: 28
          funcCount: 87
       cgiterations: 0
          algorithm: 'trust-region-reflective'
           stepsize: 4.6226e-04
            message: 'Local minimum possible....'
       bestfeasible: []
    constrviolation: []

For comparison, set the Algorithm option to 'levenberg-marquardt'.

options.Algorithm = 'levenberg-marquardt';
[x,resnorm,residual,exitflag,output] = lsqnonlin(fun,x0,[],[],options);

                                      First-order                  Norm of
 Iteration  Func-count    Resnorm      optimality      Lambda       step
     0           3        359677       2.88e+04        0.01
Objective function returned Inf; trying a new point...
     1          13        340761       3.91e+04        100000      0.280777
     2          16        304661       5.97e+04        10000       0.373146
     3          21        297292       6.55e+04        1e+06       0.0589933
     4          24        288240       7.57e+04        100000      0.0645444
     5          28        275407       1.01e+05        1e+06       0.0741266
     6          31        249954       1.62e+05        100000      0.094571
     7          36        245896       1.35e+05        1e+07       0.0133606
     8          39        243846       7.26e+04        1e+06       0.0094431
     9          42        243568       5.66e+04        100000      0.0082162
    10          45        243424       1.61e+04        10000       0.00777935
    11          48        243322       8.8e+03         1000        0.0673933
    12          51        242408       5.1e+03         100         0.675209
    13          54        233628       1.05e+04        10          6.59804
    14          57        169089       8.51e+04        1           54.6992
    15          60        30814.7      1.54e+05        0.1         196.939
    16          63        147.496      8e+03           0.01        129.795
    17          66        9.51503      117             0.001       9.96069
    18          69        9.50489      0.0714          0.0001      0.080486
    19          72        9.50489      5.23e-05        1e-05       5.07043e-05

Local minimum possible.
lsqnonlin stopped because the relative size of the current step is less than the value of the step size tolerance.

The 'levenberg-marquardt' algorithm converged with fewer iterations, but almost as many function evaluations:

output
output = struct with fields:
         iterations: 19
          funcCount: 72
           stepsize: 5.0704e-05
       cgiterations: []
      firstorderopt: 5.2319e-05
          algorithm: 'levenberg-marquardt'
            message: 'Local minimum possible....'
       bestfeasible: []
    constrviolation: []

Input Arguments


fun — Function whose sum of squares is minimized
function handle | name of function

Function whose sum of squares is minimized, specified as a function handle or the name of a function. For the 'interior-point' algorithm, fun must be a function handle. fun is a function that accepts an array x and returns an array F, the objective function evaluated at x.

Note

The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See Examples.

The function fun can be specified as a function handle to a file:

x = lsqnonlin(@myfun,x0)

where myfun is a MATLAB® function such as

function F = myfun(x)
F = ...     % Compute function values at x

fun can also be a function handle for an anonymous function.

x = lsqnonlin(@(x)sin(x.*x),x0);

lsqnonlin passes x to your objective function in the shape of the x0 argument. For example, if x0 is a 5-by-3 array, then lsqnonlin passes x to fun as a 5-by-3 array.

If the Jacobian can also be computed and the 'SpecifyObjectiveGradient' option is true, set by

options = optimoptions('lsqnonlin','SpecifyObjectiveGradient',true)

then the function fun must return a second output argument with the Jacobian value J (a matrix) at x. By checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).

function [F,J] = myfun(x)
F = ...          % Objective function values at x
if nargout > 1   % Two output arguments
    J = ...      % Jacobian of the function evaluated at x
end

If fun returns an array of m components and x has n elements, where n is the number of elements of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)
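
As an illustration, here is a hedged sketch of an objective that returns both the residual and its analytic Jacobian; the exponential model and the data inside it are assumptions, not from this page.

function [F,J] = expfun(x)
t = (0:0.5:3)';                              % illustrative data points
y = 2*exp(-1.5*t);                           % illustrative observations
F = x(1)*exp(x(2)*t) - y;                    % m-by-1 residual vector
if nargout > 1                               % Jacobian requested
    J = [exp(x(2)*t), x(1)*t.*exp(x(2)*t)];  % m-by-2: J(i,j) = dF(i)/dx(j)
end
end

Enable the Jacobian with options = optimoptions('lsqnonlin','SpecifyObjectiveGradient',true) and call x = lsqnonlin(@expfun,[1 -1],[],[],options).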

Example: @(x)cos(x).*exp(-x)

Data Types: char | function_handle | string

nonlcon — Nonlinear constraints
function handle

Nonlinear constraints, specified as a function handle. nonlcon is a function that accepts a vector or array x and returns two arrays, c(x) and ceq(x).

  • c(x) is the array of nonlinear inequality constraints at x. lsqnonlin attempts to satisfy

    c(x) <= 0 for all entries of c.

  • ceq(x) is the array of nonlinear equality constraints at x. lsqnonlin attempts to satisfy

    ceq(x) = 0 for all entries of ceq.

For example,

x = lsqnonlin(@myfun,x0,lb,ub,A,b,Aeq,beq,@mycon,options)

where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...   % Compute nonlinear inequalities at x.
ceq = ... % Compute nonlinear equalities at x.

If the Jacobians (derivatives) of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by

options = optimoptions('lsqnonlin','SpecifyConstraintGradient',true)

then nonlcon must also return, in the third and fourth output arguments, GC, the Jacobian of c(x), and GCeq, the Jacobian of ceq(x). The Jacobian G(x) of a vector function F(x) is

$$G_{i,j}(x) = \frac{\partial F_i(x)}{\partial x_j}.$$

GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the 'interior-point' algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.
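
For instance, a hedged sketch of a constraint function that also supplies gradients (the constraint itself is illustrative):

function [c,ceq,GC,GCeq] = mycon(x)
c = sin(x(1)) - cos(x(2));        % nonlinear inequality, c <= 0
ceq = [];                         % no equality constraints
if nargout > 2                    % gradients requested
    GC = [cos(x(1)); sin(x(2))];  % Jacobian of c: one column per constraint
    GCeq = [];
end
end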

Data Types: function_handle

options — Optimization options
output of optimoptions | structure as optimset returns

Optimization options, specified as the output of optimoptions or a structure as optimset returns.

Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

All Algorithms

Algorithm

Choose between 'trust-region-reflective' (default), 'levenberg-marquardt', and 'interior-point'.

The Algorithm option specifies a preference for which algorithm to use. It is only a preference, because certain conditions must be met to use each algorithm. For the trust-region-reflective algorithm, the number of elements of F returned by fun must be at least as many as the length of x.

The 'interior-point' algorithm is the only algorithm that can solve problems with linear or nonlinear constraints. If you include these constraints in your problem and do not specify an algorithm, the solver automatically switches to the 'interior-point' algorithm. The 'interior-point' algorithm calls a modified version of the fmincon 'interior-point' algorithm.

For more information on choosing the algorithm, see Choosing the Algorithm.

CheckGradients

Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. Choices are false (default) or true.

For optimset, the name is DerivativeCheck and the values are 'on' or 'off'. See Current and Legacy Option Names.

The CheckGradients option will be removed in a future release. To check derivatives, use the checkGradients function.

Diagnostics

Display diagnostic information about the function to be minimized or solved. Choices are 'off' (default) or 'on'.

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display (see Iterative Display):

  • 'off' or 'none' displays no output.

  • 'iter' displays output at each iteration, and gives the default exit message.

  • 'iter-detailed' displays output at each iteration, and gives the technical exit message.

  • 'final' (default) displays just the final output, and gives the default exit message.

  • 'final-detailed' displays just the final output, and gives the technical exit message.

FiniteDifferenceStepSize

Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are

delta = v.*sign′(x).*max(abs(x),TypicalX);

where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);

A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

For optimset, the name is FinDiffRelStep. See Current and Legacy Option Names.
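
To make the formula concrete, here is a small sketch that evaluates the forward-difference step at an illustrative point (the values of x and TypicalX are assumptions):

v = sqrt(eps);                          % default forward-difference factor
x = [0; 2; -3];
TypicalX = ones(3,1);
signp = sign(x);
signp(signp == 0) = 1;                  % sign'(x), with sign'(0) = 1
delta = v.*signp.*max(abs(x),TypicalX)  % per-variable forward-difference steps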

FiniteDifferenceType

Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.

For optimset, the name is FinDiffType. See Current and Legacy Option Names.

FunctionTolerance

Termination tolerance on the function value, a nonnegative scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

For optimset, the name is TolFun. See Current and Legacy Option Names.

FunValCheck

Check whether function values are valid. 'on' displays an error when the function returns a value that is complex, Inf, or NaN. The default 'off' displays no error.

MaxFunctionEvaluations

Maximum number of function evaluations allowed, a nonnegative integer. The default is 100*numberOfVariables for the 'trust-region-reflective' algorithm, 200*numberOfVariables for the 'levenberg-marquardt' algorithm, and 3000 for the 'interior-point' algorithm. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For optimset, the name is MaxFunEvals. See Current and Legacy Option Names.

MaxIterations

Maximum number of iterations allowed, a nonnegative integer. The default is 400 for the 'trust-region-reflective' and 'levenberg-marquardt' algorithms, and 1000 for the 'interior-point' algorithm. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For optimset, the name is MaxIter. See Current and Legacy Option Names.

OptimalityTolerance

Termination tolerance on the first-order optimality (a nonnegative scalar). The default is 1e-6. See First-Order Optimality Measure.

Internally, the 'levenberg-marquardt' algorithm uses an optimality tolerance (stopping criterion) of 1e-4 times FunctionTolerance and does not use OptimalityTolerance.

For optimset, the name is TolFun. See Current and Legacy Option Names.

OutputFcn

Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). See Output Function and Plot Function Syntax.

PlotFcn

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a name, a function handle, or a cell array of names or function handles. For custom plot functions, pass function handles. The default is none ([]):

  • 'optimplotx' plots the current point.

  • 'optimplotfunccount' plots the function count.

  • 'optimplotfval' plots the function value.

  • 'optimplotresnorm' plots the norm of the residuals.

  • 'optimplotstepsize' plots the step size.

  • 'optimplotfirstorderopt' plots the first-order optimality measure.

Custom plot functions use the same syntax as output functions. See Output Functions for Optimization Toolbox and Output Function and Plot Function Syntax.

For optimset, the name is PlotFcns. See Current and Legacy Option Names.

SpecifyObjectiveGradient

If false (default), the solver approximates the Jacobian using finite differences. If true, the solver uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function.

For optimset, the name is Jacobian, and the values are 'on' or 'off'. See Current and Legacy Option Names.

StepTolerance

Termination tolerance on x, a nonnegative scalar. The default is 1e-6 for the 'trust-region-reflective' and 'levenberg-marquardt' algorithms, and 1e-10 for the 'interior-point' algorithm. See Tolerances and Stopping Criteria.

For optimset, the name is TolX. See Current and Legacy Option Names.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). The solver uses TypicalX for scaling finite differences for gradient estimation.

UseParallel

When true, the solver estimates gradients in parallel. Disable by setting to the default, false. See Parallel Computing.

Trust-Region-Reflective Algorithm
JacobianMultiplyFcn

Jacobian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Jacobian matrix product J*Y, J'*Y, or J'*(J*Y) without actually forming J. For lsqnonlin the function is of the form

W = jmfun(Jinfo,Y,flag)

where Jinfo contains the data that helps to compute J*Y (or J'*Y, or J'*(J*Y)). For lsqcurvefit the function is of the form

W = jmfun(Jinfo,Y,flag,xdata)

where xdata is the data passed in the xdata argument.

The data Jinfo is the second argument returned by the objective function fun:

[F,Jinfo] = fun(x)
% or [F,Jinfo] = fun(x,xdata)

lsqnonlin passes the data Jinfo, Y, flag, and, for lsqcurvefit, xdata, and your function jmfun computes a result as specified next. Y is a matrix whose size depends on the value of flag. Let m specify the number of components of the objective function fun, and let n specify the number of problem variables in x. The Jacobian is of size m-by-n as described in fun. The jmfun function returns one of these results:

  • If flag == 0 then W = J'*(J*Y) and Y has size n-by-2.

  • If flag > 0 then W = J*Y and Y has size n-by-1.

  • If flag < 0 then W = J'*Y and Y has size m-by-1.

In each case, J is not formed explicitly. The solver uses Jinfo to compute the multiplications. See Passing Extra Parameters for information on how to supply values for any additional parameters jmfun needs.
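
As a hedged sketch, suppose the Jacobian has the structured form J = A*B, with the factors stored in Jinfo (this factorization is an illustrative assumption; your problem's structure will differ):

function W = jmfun(Jinfo,Y,flag)
% Jinfo.A is m-by-p and Jinfo.B is p-by-n; J = Jinfo.A*Jinfo.B is never formed.
A = Jinfo.A;
B = Jinfo.B;
if flag == 0
    W = B'*(A'*(A*(B*Y)));   % J'*(J*Y)
elseif flag > 0
    W = A*(B*Y);             % J*Y
else
    W = B'*(A'*Y);           % J'*Y
end
end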

Note

'SpecifyObjectiveGradient' must be set to true for the solver to pass Jinfo from fun to jmfun.

See Minimization with Dense Structured Hessian, Linear Equalities and Jacobian Multiply Function with Linear Least Squares for similar examples.

For optimset, the name is JacobMult. See Current and Legacy Option Names.

JacobPattern

Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j). Otherwise, set JacobPattern(i,j) = 0. In other words, JacobPattern(i,j) = 1 when you can have ∂fun(i)/∂x(j) ≠ 0.

Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). The solver can approximate J via sparse finite differences when you give JacobPattern.

If the structure is unknown, do not set JacobPattern. The default behavior is as if JacobPattern is a dense matrix of ones. Then the solver computes a full finite-difference approximation in each iteration. This can be expensive for large problems, so it is usually better to determine the sparsity structure.
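
For example, a hedged sketch for a residual in which fun(i) depends only on x(i) and x(i+1), so the Jacobian is bidiagonal (the model size is illustrative):

n = 100;
Jpat = spdiags(ones(n,2),[0 1],n,n);   % bidiagonal sparsity pattern
options = optimoptions('lsqnonlin','JacobPattern',Jpat);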

MaxPCGIter

Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,numberOfVariables/2). For more information, see Large Scale Nonlinear Least Squares.

PrecondBandWidth

Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default PrecondBandWidth is Inf, which means a direct factorization (Cholesky) is used rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution. Set PrecondBandWidth to 0 for diagonal preconditioning (upper bandwidth of 0). For some problems, an intermediate bandwidth reduces the number of PCG iterations.

SubproblemAlgorithm

Determines how the iteration step is calculated. The default, 'factorization', takes a slower but more accurate step than 'cg'. See Trust-Region-Reflective Least Squares.

TolPCG

Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.

Levenberg-Marquardt Algorithm
InitDamping

Initial value of the Levenberg-Marquardt parameter, a positive scalar. Default is 1e-2. For details, see Levenberg-Marquardt Method.

ScaleProblem

'jacobian' can sometimes improve the convergence of a poorly scaled problem; the default is 'none'.

Interior-Point Algorithm
BarrierParamUpdate

Specifies how fmincon updates the barrier parameter (see fmincon Interior Point Algorithm). The options are:

  • 'monotone' (default)

  • 'predictor-corrector'

This option can affect the speed and convergence of the solver, but the effect is not easy to predict.

ConstraintTolerance

Tolerance on the constraint violation, a nonnegative scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

For optimset, the name is TolCon. See Current and Legacy Option Names.

InitBarrierParam

Initial barrier value, a positive scalar. Sometimes it might help to try a value above the default 0.1, especially if the objective or constraint functions are large.

SpecifyConstraintGradient

Gradient for nonlinear constraint functions defined by the user. When set to the default, false, lsqnonlin estimates gradients of the nonlinear constraints by finite differences. When set to true, lsqnonlin expects the constraint function to have four outputs, as described in nonlcon.

For optimset, the name is GradConstr and the values are 'on' or 'off'. See Current and Legacy Option Names.

SubproblemAlgorithm

Determines how the iteration step is calculated. The default, 'factorization', is usually faster than 'cg' (conjugate gradient), though 'cg' might be faster for large problems with dense Hessians. See fmincon Interior Point Algorithm.

For optimset, the values are 'cg' and 'ldl-factorization'. See Current and Legacy Option Names.

Example: options = optimoptions('lsqnonlin','FiniteDifferenceType','central')

problem — Problem structure
structure

Problem structure, specified as a structure with the following fields:

objective: Objective function
x0: Initial point for x
Aineq: Matrix for linear inequality constraints
bineq: Vector for linear inequality constraints
Aeq: Matrix for linear equality constraints
beq: Vector for linear equality constraints
lb: Vector of lower bounds
ub: Vector of upper bounds
nonlcon: Nonlinear constraint function
solver: 'lsqnonlin'
options: Options created with optimoptions

You must supply at least the objective, x0, solver, and options fields in the problem structure.
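
For example, a minimal sketch of a complete problem structure (the Rosenbrock-style residual is illustrative):

problem.objective = @(x) [10*(x(2) - x(1)^2); 1 - x(1)];  % residual vector
problem.x0 = [-1.9 2];
problem.solver = 'lsqnonlin';
problem.options = optimoptions('lsqnonlin');
x = lsqnonlin(problem)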

Data Types: struct

Output Arguments


resnorm — Squared norm of the residual
nonnegative real

Squared norm of the residual, returned as a nonnegative real. resnorm is the squared 2-norm of the residual at x: sum(fun(x).^2).

residual — Value of objective function at solution
array

Value of objective function at solution, returned as an array. In general, residual = fun(x).
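
The two outputs are consistent: resnorm equals sum(residual.^2). A quick check with an illustrative residual:

[x,resnorm,residual] = lsqnonlin(@(x)[x(1)-1; x(2)+2; x(1)*x(2)],[0 0]);
resnorm - sum(residual.^2)   % essentially zero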

Limitations

  • The trust-region-reflective algorithm does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, lsqnonlin uses the Levenberg-Marquardt algorithm.

  • lsqnonlin can solve complex-valued problems directly. Note that constraints do not make sense for complex values, because complex numbers are not well ordered; asking whether one complex value is greater or less than another complex value is nonsensical. For a complex problem with bound constraints, split the variables into real and imaginary parts, as in the sketch after this list. Do not use the 'interior-point' algorithm with complex data. See Fit a Model to Complex-Valued Data.

  • The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-reflective method forms JᵀJ (where J is the Jacobian matrix) before computing the preconditioner. Therefore, a row of J with many nonzeros, which results in a nearly dense product JᵀJ, can lead to a costly solution process for large problems.

  • If components of x have no upper (or lower) bounds, lsqnonlin prefers that the corresponding components of ub (or lb) be set to inf (or -inf for lower bounds) as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
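
A hedged sketch of the real/imaginary split for a bound-constrained complex fit (the model, data, and bounds are illustrative):

t = (0:0.1:1)';
y = (2 - 0.5i)*exp(t);                 % illustrative complex data
funz = @(z) z*exp(t) - y;              % complex residual in the scalar z
% Write z = v(1) + 1i*v(2) and stack real and imaginary parts, so the
% sum of squares equals sum(abs(funz(z)).^2).
funv = @(v) [real(funz(complex(v(1),v(2)))); imag(funz(complex(v(1),v(2))))];
lb = [0 -Inf];                         % bound applies to the real part only
ub = [Inf Inf];
v = lsqnonlin(funv,[1 0],lb,ub);
z = complex(v(1),v(2))                 % recover the complex solution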

You can use the trust-region-reflective algorithm in lsqnonlin, lsqcurvefit, and fsolve with small- to medium-scale problems without computing the Jacobian in fun or providing the Jacobian sparsity pattern. (This also applies to using fmincon or fminunc without computing the Hessian or supplying the Hessian sparsity pattern.) How small is small- to medium-scale? No absolute answer is available, as it depends on the amount of virtual memory in your computer system configuration.

Suppose your problem has m equations and n unknowns. If the command J = sparse(ones(m,n)) causes an Out of memory error on your machine, then this is certainly too large a problem. If it does not result in an error, the problem might still be too large. You can find out only by running it and seeing if MATLAB runs within the amount of virtual memory available on your system.

Algorithms

The Levenberg-Marquardt and trust-region-reflective methods are based on the nonlinear least-squares algorithms also used in fsolve.

  • The default trust-region-reflective algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares.

  • The Levenberg-Marquardt method is described in references [4], [5], and [6]. See Levenberg-Marquardt Method.

The 'interior-point' algorithm uses the fmincon 'interior-point' algorithm with some modifications. For details, see Modified fmincon Algorithm for Constrained Least Squares.

Alternative Functionality

App

The Optimize Live Editor task provides a visual interface for lsqnonlin.

References

[1] Coleman, T.F. and Y. Li. “An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds.” SIAM Journal on Optimization, Vol. 6, 1996, pp. 418–445.

[2] Coleman, T.F. and Y. Li. “On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds.” Mathematical Programming, Vol. 67, Number 2, 1994, pp. 189–224.

[3] Dennis, J. E. Jr. “Nonlinear Least-Squares.” State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269–312.

[4] Levenberg, K. “A Method for the Solution of Certain Problems in Least-Squares.” Quarterly Applied Mathematics 2, 1944, pp. 164–168.

[5] Marquardt, D. “An Algorithm for Least-squares Estimation of Nonlinear Parameters.” SIAM Journal Applied Mathematics, Vol. 11, 1963, pp. 431–441.

[6] Moré, J. J. “The Levenberg-Marquardt Algorithm: Implementation and Theory.” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, 1977, pp. 105–116.

[7] Moré, J. J., B. S. Garbow, and K. E. Hillstrom. User Guide for MINPACK 1. Argonne National Laboratory, Rept. ANL–80–74, 1980.

[8] Powell, M. J. D. “A Fortran Subroutine for Solving Systems of Nonlinear Algebraic Equations.” Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Ch.7, 1970.


Version History

Introduced before R2006a


The CheckGradients option will be removed in a future release. To check the first derivatives of objective functions or nonlinear constraint functions, use the checkGradients function.

See Also

fsolve | lsqcurvefit | optimoptions | Optimize

Topics

  • Nonlinear Least Squares (Curve Fitting)
  • Solver-Based Optimization Problem Setup
  • Least-Squares (Model Fitting) Algorithms

