
Unconstrained Minimization Using fminunc

This example shows how to use fminunc to solve the nonlinear minimization problem

    minimize f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)

over x = (x1, x2). To solve this two-dimensional problem, write a function that returns f(x). Then, invoke the unconstrained minimization routine fminunc starting from the initial point x0 = [-1,1].

The helper function objfun at the end of this example calculates f(x).
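As a sanity check on the numerical result that follows, the gradient of f can be written down analytically; this short derivation is not part of the original example, but it confirms that the point fminunc returns is a stationary point.

```latex
f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1x_2 + 2x_2 + 1\right)

\frac{\partial f}{\partial x_1} = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1x_2 + 2x_2 + 1\right) + e^{x_1}\left(8x_1 + 4x_2\right)

\frac{\partial f}{\partial x_2} = e^{x_1}\left(4x_2 + 4x_1 + 2\right)
```

At x = (0.5, -1), the polynomial factor is 4(0.25) + 2(1) + 4(0.5)(-1) + 2(-1) + 1 = 0, and 8(0.5) + 4(-1) = 0, so the first partial derivative vanishes; likewise 4(-1) + 4(0.5) + 2 = 0, so the second does too. Both components of the gradient are zero, consistent with fminunc reporting a local minimum there.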

To find the minimum of f(x), set the initial point and call fminunc.

x0 = [-1,1];
[x,fval,exitflag,output] = fminunc(@objfun,x0);
Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.

View the results, including the first-order optimality measure in the output structure. Display the solution x.

x
x =

    0.5000   -1.0000

The exitflag output indicates whether the algorithm converges. exitflag = 1 means fminunc finds a local minimum.
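In a script, one might guard downstream code on this flag. The following is a sketch, not part of the original example; it reuses the objfun helper defined at the end of this page.

```matlab
% Sketch: check convergence before using the solution.
x0 = [-1,1];
[x,fval,exitflag,output] = fminunc(@objfun,x0);
if exitflag > 0
    % Positive exit flags indicate fminunc stopped at a solution.
    fprintf('Converged to x = [%g %g] with f(x) = %g\n', x(1), x(2), fval);
else
    warning('fminunc did not report convergence (exitflag = %d).', exitflag);
end
```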

The output structure gives more details about the optimization. For fminunc, the structure includes:

  • output.iterations, the number of iterations

  • output.funcCount, the number of function evaluations

  • output.stepsize, the final step size

  • output.firstorderopt, a measure of first-order optimality (which, in this unconstrained case, is the infinity norm of the gradient at the solution)

  • output.algorithm, the type of algorithm used

  • output.message, the reason the algorithm stopped
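These fields can be inspected directly after the call above; the following display code is a sketch, not part of the original example.

```matlab
% Display selected diagnostics from the output structure.
fprintf('Iterations: %d\n', output.iterations);
fprintf('Function evaluations: %d\n', output.funcCount);
fprintf('First-order optimality: %g\n', output.firstorderopt);
disp(output.message)   % reason fminunc stopped
```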

Helper Function

This code creates the objfun helper function.

function f = objfun(x)
f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
end
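fminunc can also use an analytic gradient when the objective returns it as a second output and the SpecifyObjectiveGradient option is set to true. The variant below is a sketch going beyond the original example; the helper name objfungrad is chosen here for illustration, and the gradient expressions follow from differentiating f(x) above.

```matlab
% Sketch: objective that also returns the analytic gradient.
function [f,g] = objfungrad(x)
p = 4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1;
f = exp(x(1)) * p;
if nargout > 1                          % gradient requested by the solver
    g = [exp(x(1)) * (p + 8*x(1) + 4*x(2));   % df/dx1 (product rule)
         exp(x(1)) * (4*x(2) + 4*x(1) + 2)];  % df/dx2
end
end
```

To have fminunc use this gradient instead of estimating it by finite differences:

options = optimoptions('fminunc','SpecifyObjectiveGradient',true);
[x,fval] = fminunc(@objfungrad,[-1,1],options);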
