## Iterative Display

### Introduction

The iterative display is a table of statistics describing the calculations in each iteration of a solver. The statistics depend on both the solver and the solver algorithm. The table appears in the MATLAB® Command Window when you run solvers with appropriate options. For more information about iterations, see Iterations and Function Counts.

Obtain the iterative display by using optimoptions with the Display option set to 'iter' or 'iter-detailed'. For example:

```matlab
options = optimoptions(@fminunc,'Display','iter','Algorithm','quasi-newton');
[x,fval,exitflag,output] = fminunc(@sin,0,options);
```

```
                                                 First-order
 Iteration  Func-count       f(x)    Step-size    optimality
     0           2              0                          1
     1           4      -0.841471           1           0.54
     2           8             -1    0.484797       0.000993
     3          10             -1           1       5.62e-05
     4          12             -1           1              0

Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
```

You can also obtain the iterative display by using the Optimization app. In the Display to command window section of the Options pane, select Level of display > iterative or iterative with detailed message.

The iterative display is available for all solvers except:

• lsqlin 'trust-region-reflective' algorithm

• lsqnonneg

This table lists some common headings of iterative display.

**f(x) or Fval**

Current objective function value; for fsolve, the square of the norm of the function value vector

**First-order optimality**

First-order optimality measure (see First-Order Optimality Measure)

**Func-count or F-count**

Number of function evaluations; see Iterations and Function Counts

**Iteration or Iter**

Iteration number; see Iterations and Function Counts

**Norm of step**

Size of the current step (size is the Euclidean norm, or 2-norm). For the 'trust-region' and 'trust-region-reflective' algorithms, when constraints exist, Norm of step is the norm of D*s. Here, s is the step and D is a diagonal scaling matrix described in the trust-region subproblem section of the algorithm description.

The tables in this section describe headings of the iterative display whose meaning is specific to the optimization function you are using.

#### fgoalattain, fmincon, fminimax, and fseminf

This table describes the headings specific to fgoalattain, fmincon, fminimax, and fseminf.


**Attainment factor**

Value of the attainment factor for fgoalattain

**CG-iterations**

Number of conjugate gradient iterations taken in the current iteration (see Preconditioned Conjugate Gradient Method)

**Directional derivative**

Gradient of the objective function along the search direction

**Feasibility**

Maximum constraint violation, where satisfied inequality constraints count as 0

**Line search steplength**

Multiplicative factor that scales the search direction (see Equation 29)

**Max constraint**

Maximum violation among all constraints, both internally constructed and user-provided; can be negative when no constraint is binding

**Objective value**

Objective function value of the nonlinear programming reformulation of the minimax problem for fminimax

**Procedure**

Hessian update procedures:

• Infeasible start point

• Hessian not updated

• Hessian modified

• Hessian modified twice

QP subproblem procedures:

• dependent — The solver detected and removed dependent (redundant) equality constraints.

• Infeasible — The QP subproblem with linearized constraints is infeasible.

• Overly constrained — The QP subproblem with linearized constraints is infeasible.

• Unbounded — The QP subproblem is feasible with large negative curvature.

• Ill-posed — The QP subproblem search direction is too small.

• Unreliable — The QP subproblem seems to be poorly conditioned.

**Steplength**

Multiplicative factor that scales the search direction (see Equation 29)
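As a sketch of how these headings appear in practice, the following call enables the iterative display for the fmincon 'interior-point' algorithm; the objective, constraint, and start point are invented for this example:

```matlab
% Illustrative problem (not from the text): minimize a quadratic
% subject to one linear inequality, with iterative display enabled.
% The display includes Feasibility and First-order optimality columns.
fun = @(x) x(1)^2 + x(2)^2;                  % objective
A = [1 1];  b = 1;                           % x1 + x2 <= 1
options = optimoptions('fmincon','Display','iter', ...
    'Algorithm','interior-point');
x = fmincon(fun,[2;2],A,b,[],[],[],[],[],options);
```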

#### fminbnd and fzero

This table describes the headings specific to fminbnd and fzero.

**Procedure**

Procedures for fminbnd:

• initial

• golden (golden section search)

• parabolic (parabolic interpolation)

Procedures for fzero:

• initial (initial point)

• search (search for an interval containing a zero)

• bisection

• interpolation (linear interpolation or inverse quadratic interpolation)

**x**

Current point for the algorithm
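Note that fminbnd and fzero take their options from optimset rather than optimoptions. A minimal sketch, with illustrative functions and intervals:

```matlab
% Illustrative calls; the Procedure column reports the steps listed above.
options = optimset('Display','iter');
xmin = fminbnd(@sin,0,2*pi,options);   % golden / parabolic procedures
xz   = fzero(@cos,1,options);          % search / interpolation procedures
```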

#### fminsearch

This table describes the headings specific to fminsearch.

**min f(x)**

Minimum function value in the current simplex

**Procedure**

Simplex procedure at the current iteration. Procedures include:

• initial simplex

• expand

• reflect

• shrink

• contract inside

• contract outside

For details, see fminsearch Algorithm.
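A minimal sketch that exercises these procedures, using an illustrative objective (fminsearch also takes its options from optimset):

```matlab
% Illustrative two-variable objective; the Procedure column shows the
% simplex operations (expand, reflect, contract inside, and so on).
options = optimset('Display','iter');
x = fminsearch(@(x) (x(1)-1)^2 + (x(2)-2)^2, [0 0], options);
```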

#### fminunc

This table describes the headings specific to fminunc.

**CG-iterations**

Number of conjugate gradient iterations taken in the current iteration (see Preconditioned Conjugate Gradient Method)

**Line search steplength**

Multiplicative factor that scales the search direction (see Equation 11)

The fminunc 'quasi-newton' algorithm can issue a skipped update message to the right of the First-order optimality column. This message means that fminunc did not update its Hessian estimate, because the resulting matrix would not have been positive definite. The message usually indicates that the objective function is not smooth at the current point.
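One way to see this behavior is to minimize an objective that is nonsmooth along one coordinate; whether the skipped update message actually appears depends on the iterates, so this sketch is illustrative only:

```matlab
% Objective is nonsmooth at x1 = 0, so the quasi-Newton Hessian update
% may be skipped near the kink (not guaranteed on every run).
options = optimoptions(@fminunc,'Display','iter','Algorithm','quasi-newton');
x = fminunc(@(x) abs(x(1)) + x(2)^2, [1;1], options);
```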

#### fsolve

This table describes the headings specific to fsolve.

**Directional derivative**

Gradient of the function along the search direction

**Lambda**

λ_k value defined in Levenberg-Marquardt Method

**Residual**

Residual (sum of squares) of the function
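The Lambda column appears when you select the 'levenberg-marquardt' algorithm. A sketch with an illustrative system of equations:

```matlab
% Illustrative nonlinear system: a circle and a line.
fun = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];
options = optimoptions('fsolve','Display','iter', ...
    'Algorithm','levenberg-marquardt');
x = fsolve(fun,[2;2],options);
```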

#### intlinprog

This table describes the headings specific to intlinprog.

**nodes explored**

Cumulative number of explored nodes

**total time (s)**

Time in seconds since intlinprog started

**num int solution**

Number of integer feasible points found

**integer fval**

Objective function value of the best integer feasible point found. This value is an upper bound for the final objective function value.

**relative gap (%)**

$\frac{100\left(b-a\right)}{|b|+1},$

where

• b is the objective function value of the best integer feasible point.

• a is the best lower bound on the objective function value.
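The reported percentage can be reproduced directly from the formula above; the values of a and b here are invented for illustration:

```matlab
b = 12.5;                            % best integer objective (illustrative)
a = 12.0;                            % best lower bound (illustrative)
relgap = 100*(b - a)/(abs(b) + 1);   % about 3.7037 percent
```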

**Note**

Although you specify RelativeGapTolerance as a decimal number, the iterative display and output.relativegap report the gap as a percentage, meaning 100 times the measured relative gap. If the exit message refers to the relative gap, this value is the measured relative gap, not a percentage.

#### linprog

This table describes the headings specific to linprog. Each algorithm has its own iterative display.

**Primal Infeas A*x-b or Primal Infeas**

Primal infeasibility, a measure of the constraint violations, which should be zero at a solution.

For definitions, see Predictor-Corrector ('interior-point'), Main Algorithm ('interior-point-legacy'), or Dual-Simplex Algorithm.

**Dual Infeas A'*y+z-w-f or Dual Infeas**

Dual infeasibility, a measure of the derivative of the Lagrangian, which should be zero at a solution.

For the definition of the Lagrangian, see Predictor-Corrector. For the definition of dual infeasibility, see Predictor-Corrector ('interior-point'), Main Algorithm ('interior-point-legacy'), or Dual-Simplex Algorithm.

**Upper Bounds {x}+s-ub**

Upper bound feasibility. {x} means those x with finite upper bounds. This value is the r_u residual in Interior-Point-Legacy Linear Programming.

**Duality Gap x'*z+s'*w**

Duality gap (see Interior-Point-Legacy Linear Programming) between the primal objective and the dual objective. s and w appear in this equation only if the problem has finite upper bounds.

**Total Rel Error**

Total relative error, described at the end of Main Algorithm

**Complementarity**

A measure of the Lagrange multipliers times distance from the bounds, which should be zero at a solution. See the r_c variable in Stopping Conditions.

**Time**

Time in seconds that linprog has been running
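A sketch of a small linear program that produces the dual-simplex iterative display; the problem data are illustrative:

```matlab
f = [-1; -2];                        % minimize -x1 - 2*x2
A = [1 1; 1 0];  b = [4; 3];         % linear inequalities
lb = [0; 0];                         % lower bounds
options = optimoptions('linprog','Display','iter', ...
    'Algorithm','dual-simplex');
x = linprog(f,A,b,[],[],lb,[],options);
```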

#### lsqlin

The lsqlin 'interior-point' iterative display is inherited from the quadprog iterative display. The relationship between these functions is explained in Linear Least Squares: Interior-Point or Active-Set. For iterative display details, see quadprog.

#### lsqnonlin and lsqcurvefit

This table describes the headings specific to lsqnonlin and lsqcurvefit.

**Directional derivative**

Gradient of the function along the search direction

**Lambda**

λ_k value defined in Levenberg-Marquardt Method

**Resnorm**

Value of the squared 2-norm of the residual at x

**Residual**

Residual vector of the function
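With the 'levenberg-marquardt' algorithm, the display includes the Lambda and Resnorm columns. A sketch with illustrative residuals:

```matlab
% Rosenbrock-style residual vector (illustrative).
fun = @(x) [10*(x(2) - x(1)^2); 1 - x(1)];
options = optimoptions('lsqnonlin','Display','iter', ...
    'Algorithm','levenberg-marquardt');
x = lsqnonlin(fun,[-1.2; 1],[],[],options);
```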

#### quadprog

This table describes the headings specific to quadprog. Only the 'interior-point-convex' algorithm has an iterative display.

**Primal Infeas**

Primal infeasibility, defined as max( norm(Aeq*x - beq, inf), abs(min(0, min(b - A*x))) )

**Dual Infeas**

Dual infeasibility, defined as norm(H*x + f - A'*lambda_ineqlin - Aeq'*lambda_eqlin, inf)

**Complementarity**

A measure of the maximum absolute value of the Lagrange multipliers of inactive inequalities, which should be zero at a solution. This quantity is g in Infeasibility Detection.
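As a sketch, the primal infeasibility measure can be evaluated at a candidate point; the problem data and the point are invented for illustration, and violations of A*x ≤ b enter as positive amounts:

```matlab
A = [1 2];  b = 2;                   % inequality A*x <= b (illustrative)
Aeq = [1 -1];  beq = 0;              % equality Aeq*x = beq (illustrative)
x = [1; 1];                          % candidate point
primal_infeas = max(norm(Aeq*x - beq, inf), ...
                    abs(min(0, min(b - A*x))));   % equals 1 here
```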