linprog

Solve linear programming problems

Description

Linear programming solver

Finds the minimum of a problem specified by

min_x f'*x  such that  A*x ≤ b,  Aeq*x = beq,  lb ≤ x ≤ ub.

f, x, b, beq, lb, and ub are vectors, and A and Aeq are matrices.

Note

linprog applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.

x = linprog(f,A,b) solves min f'*x such that A*x ≤ b.

x = linprog(f,A,b,Aeq,beq) includes equality constraints Aeq*x = beq. Set A = [] and b = [] if no inequalities exist.

x = linprog(f,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that the solution is always in the range lb ≤ x ≤ ub. Set Aeq = [] and beq = [] if no equalities exist.

Note

If the specified input bounds for a problem are inconsistent, the output fval is [].
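
For example, the following minimal sketch (hypothetical one-variable data) illustrates this behavior:

f = 1;
A = []; b = [];
Aeq = []; beq = [];
lb = 2; ub = 1;                          % inconsistent: lb > ub
[x,fval,exitflag] = linprog(f,A,b,Aeq,beq,lb,ub);
% x and fval come back empty; exitflag is negative (no feasible point)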

x = linprog(f,A,b,Aeq,beq,lb,ub,options) minimizes with the optimization options specified by options. Use optimoptions to set these options.

x = linprog(problem) finds the minimum for problem, a structure described in problem.

You can import a problem structure from an MPS file using mpsread. You can also create a problem structure from an OptimizationProblem object by using prob2struct.
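
For example, a minimal sketch of both routes (the file name 'mymodel.mps' is a placeholder, and prob stands for an OptimizationProblem object such as the one built in the problem-based example below):

% Import a problem structure from an MPS file (placeholder file name);
% for a purely continuous LP the imported structure can go straight to linprog.
problem = mpsread('mymodel.mps');
x = linprog(problem);

% Or convert an existing OptimizationProblem object and solve it.
problem = prob2struct(prob);
x = linprog(problem);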

[x,fval] = linprog(___), for any input arguments, returns the value of the objective function at the solution x: fval = f'*x.

[x,fval,exitflag,output] = linprog(___) additionally returns a value exitflag that describes the exit condition, and a structure output that contains information about the optimization process.

[x,fval,exitflag,output,lambda] = linprog(___) additionally returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

Examples

Solve a simple linear program defined by linear inequalities.

For this example, use these linear inequality constraints:

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

Use the objective function -x(1)-x(2)/3.

f = [-1 -1/3];

Solve the linear program.

x = linprog(f,A,b)
Optimal solution found.
x = 2×1

    0.6667
    1.3333

Solve a simple linear program defined by linear inequalities and linear equalities.

For this example, use these linear inequality constraints:

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

Use the linear equality constraint x(1)+x(2)/4=1/2.

Aeq = [1 1/4];
beq = 1/2;

Use the objective function -x(1)-x(2)/3.

f = [-1 -1/3];

Solve the linear program.

x = linprog(f,A,b,Aeq,beq)
Optimal solution found.
x = 2×1

     0
     2

Solve a simple linear program with linear inequalities, linear equalities, and bounds.

For this example, use these linear inequality constraints:

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

Use the linear equality constraint x(1)+x(2)/4=1/2.

Aeq = [1 1/4];
beq = 1/2;

Set these bounds:

-1 ≤ x(1) ≤ 1.5
-0.5 ≤ x(2) ≤ 1.25.

lb = [-1,-0.5];
ub = [1.5,1.25];

Use the objective function -x(1)-x(2)/3.

f = [-1 -1/3];

Solve the linear program.

x = linprog(f,A,b,Aeq,beq,lb,ub)
Optimal solution found.
x = 2×1

    0.1875
    1.2500

Solve a linear program using the 'interior-point' algorithm.

For this example, use these linear inequality constraints:

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

Use the linear equality constraint x(1)+x(2)/4=1/2.

Aeq = [1 1/4];
beq = 1/2;

Set these bounds:

-1 ≤ x(1) ≤ 1.5
-0.5 ≤ x(2) ≤ 1.25.

lb = [-1,-0.5];
ub = [1.5,1.25];

Use the objective function -x(1)-x(2)/3.

f = [-1 -1/3];

Set options to use the 'interior-point' algorithm.

options = optimoptions('linprog','Algorithm','interior-point');

Solve the linear program using the 'interior-point' algorithm.

x = linprog(f,A,b,Aeq,beq,lb,ub,options)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in feasible directions, to within the function tolerance, and constraints are satisfied to within the constraint tolerance.
x = 2×1

    0.1875
    1.2500

This example shows how to set up a problem using the problem-based approach and then solve it using the solver-based approach. The problem is

maximize x + y/3 (over x and y) subject to

x + y ≤ 2
x + y/4 ≤ 1
x - y ≤ 2
x/4 + y ≥ -1
x + y ≥ 1
-x + y ≤ 2
x + y/4 = 1/2
-1 ≤ x ≤ 1.5
-1/2 ≤ y ≤ 1.25.

Create an OptimizationProblem object named prob to represent this problem.

x = optimvar('x','LowerBound',-1,'UpperBound',1.5);
y = optimvar('y','LowerBound',-1/2,'UpperBound',1.25);
prob = optimproblem('Objective',x + y/3,'ObjectiveSense','max');
prob.Constraints.c1 = x + y <= 2;
prob.Constraints.c2 = x + y/4 <= 1;
prob.Constraints.c3 = x - y <= 2;
prob.Constraints.c4 = x/4 + y >= -1;
prob.Constraints.c5 = x + y >= 1;
prob.Constraints.c6 = -x + y <= 2;
prob.Constraints.c7 = x + y/4 == 1/2;

Convert the problem object to a problem structure.

problem = prob2struct(prob);

Solve the resulting problem structure.

[sol,fval,exitflag,output] = linprog(problem)
Optimal solution found.
sol = 2×1

    0.1875
    1.2500

fval = -0.6042
exitflag = 1
output = struct with fields:
         iterations: 0
          algorithm: 'dual-simplex-highs'
    constrviolation: 0
            message: 'Optimal solution found.'
      firstorderopt: 0

The returned fval is negative, even though the solution components are positive. Internally, prob2struct turns the maximization problem into a minimization problem of the negative of the objective function. See Maximizing an Objective.
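
For example, to recover the maximized objective of the original problem, negate the returned value:

maxval = -fval   % value of x + y/3 at the solution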

Which component of sol corresponds to which optimization variable? Examine the Variables property of prob.

prob.Variables
ans = struct with fields:
    x: [1x1 optim.problemdef.OptimizationVariable]
    y: [1x1 optim.problemdef.OptimizationVariable]

As you might expect, sol(1) corresponds to x, and sol(2) corresponds to y. See Algorithms.
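
One way to confirm this mapping programmatically is varindex, which returns the index of each optimization variable in the solver-based solution vector (a short sketch):

idx = varindex(prob);
xsol = sol(idx.x)   % solution value of optimization variable x
ysol = sol(idx.y)   % solution value of optimization variable y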

Calculate the solution and objective function value for a simple linear program.

The inequality constraints are

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

The objective function is -x(1)-x(2)/3.

f = [-1 -1/3];

Solve the problem and return the objective function value.

[x,fval] = linprog(f,A,b)
Optimal solution found.
x = 2×1

    0.6667
    1.3333

fval = -1.1111

Obtain the exit flag and output structure to better understand the solution process and quality.

For this example, use these linear inequality constraints:

x(1) + x(2) ≤ 2
x(1) + x(2)/4 ≤ 1
x(1) - x(2) ≤ 2
-x(1)/4 - x(2) ≤ 1
-x(1) - x(2) ≤ -1
-x(1) + x(2) ≤ 2.

A = [1 1
    1 1/4
    1 -1
    -1/4 -1
    -1 -1
    -1 1];

b = [2 1 2 1 -1 2];

Use the linear equality constraint x(1)+x(2)/4=1/2.

Aeq = [1 1/4];
beq = 1/2;

Set these bounds:

-1 ≤ x(1) ≤ 1.5
-0.5 ≤ x(2) ≤ 1.25.

lb = [-1,-0.5];
ub = [1.5,1.25];

Use the objective function -x(1)-x(2)/3.

f = [-1 -1/3];

Set options to use the 'dual-simplex' algorithm.

options = optimoptions('linprog','Algorithm','dual-simplex');

Solve the linear program and request the function value, exit flag, and output structure.

[x,fval,exitflag,output] = linprog(f,A,b,Aeq,beq,lb,ub,options)
Optimal solution found.
x = 2×1

    0.1875
    1.2500

fval = -0.6042
exitflag = 1
output = struct with fields:
         iterations: 0
          algorithm: 'dual-simplex-highs'
    constrviolation: 0
            message: 'Optimal solution found.'
      firstorderopt: 0

  • fval, the objective function value, is larger than in the Return the Objective Function Value example, because this problem has more constraints.

  • exitflag = 1 indicates that the solution is reliable.

  • output.iterations = 0 indicates that linprog found the solution during presolve, and did not have to iterate at all.

Solve a simple linear program and examine the solution and the Lagrange multipliers.

Use the objective function

f(x) = -5x1 - 4x2 - 6x3.

f = [-5; -4; -6];

Use the linear inequality constraints

x1 - x2 + x3 ≤ 20
3x1 + 2x2 + 4x3 ≤ 42
3x1 + 2x2 ≤ 30.

A =  [1 -1  1
      3  2  4
      3  2  0];
b = [20;42;30];

Constrain all variables to be positive:

x1 ≥ 0
x2 ≥ 0
x3 ≥ 0.

lb = zeros(3,1);

Set Aeq and beq to [], indicating that there are no linear equality constraints.

Aeq = [];
beq = [];

Call linprog, obtaining the Lagrange multipliers.

[x,fval,exitflag,output,lambda] = linprog(f,A,b,Aeq,beq,lb);
Optimal solution found.

Examine the solution and Lagrange multipliers.

x,lambda.ineqlin,lambda.lower
x = 3×1

     0
    15
     3

ans = 3×1

         0
    1.5000
    0.5000

ans = 3×1

     1
     0
     0

lambda.ineqlin is nonzero for the second and third components of x. This indicates that the second and third linear inequality constraints are satisfied with equalities:

3x1+2x2+4x3=42

3x1+2x2=30.

Check that this is true:

A*x
ans = 3×1

   -12
    42
    30

lambda.lower is nonzero for the first component of x. This indicates that x(1) is at its lower bound of 0.

Input Arguments

Coefficient vector, specified as a real vector or real array. The coefficient vector represents the objective function f'*x. The notation assumes that f is a column vector, but you can use a row vector or array. Internally, linprog converts f to the column vector f(:).

Example: f = [1,3,5,-6]

Data Types: double

Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (length of f). For large problems, pass A as a sparse matrix.

A encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and b is a column vector with M elements.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x-components add up to 1 or less, take A = ones(1,N) and b = 1.

Data Types: double

Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (length of f). For large problems, pass Aeq as a sparse matrix.

Aeq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and beq is a column vector with Me elements.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x-components sum to 1, take Aeq = ones(1,N) and beq = 1.

Data Types: double

Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.

b encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: single | double

Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.

beq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: single | double

Lower bounds, specified as a real vector or real array. If the length of f is equal to the length of lb, then lb specifies that

x(i) >= lb(i) for all i.

If numel(lb) < numel(f), then lb specifies that

x(i) >= lb(i) for 1 <= i <= numel(lb).

In this case, solvers issue a warning.

Example: To specify that all x-components are positive, use lb = zeros(size(f)).

Data Types: double
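
If you have lower bounds for only some leading variables, one way to avoid that warning is to pad lb to full length with -Inf (a sketch; lbShort is a hypothetical vector holding the bounds you actually want to impose):

nVars = numel(f);                       % number of variables
lb = -Inf(nVars,1);                     % no lower bound by default
lb(1:numel(lbShort)) = lbShort(:);      % apply the bounds you specified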

Upper bounds, specified as a real vector or real array. If the length of f is equal to the length of ub, then ub specifies that

x(i) <= ub(i) for all i.

If numel(ub) < numel(f), then ub specifies that

x(i) <= ub(i) for 1 <= i <= numel(ub).

In this case, solvers issue a warning.

Example: To specify that all x-components are less than 1, use ub = ones(size(f)).

Data Types: double

Optimization options, specified as the output of optimoptions or a structure as optimset returns.

Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

All Algorithms
Algorithm

Choose the optimization algorithm:

  • 'dual-simplex-highs' (default)

  • 'dual-simplex-legacy'

  • 'interior-point'

  • 'interior-point-legacy'

For information on choosing the algorithm, see Linear Programming Algorithms.

Diagnostics

Display diagnostic information about the function to be minimized or solved. Choose 'off' (default) or 'on'.

Display

Level of display (see Iterative Display):

  • 'final' (default) displays just the final output.

  • 'off' or 'none' displays no output.

  • 'iter' displays output at each iteration.

MaxIterations

Maximum number of iterations allowed, a nonnegative integer. The default is:

  • intmax, which is approximately 2.1475e+09, for the 'dual-simplex-highs' algorithm

  • 10*(numberOfEqualities + numberOfInequalities + numberOfVariables) for the 'dual-simplex-legacy' algorithm

  • 200 for the 'interior-point' algorithm

  • 85 for the 'interior-point-legacy' algorithm

See Tolerances and Stopping Criteria and Iterations and Function Counts.

For optimset, the name is MaxIter. See Current and Legacy Option Names.

OptimalityTolerance

Termination tolerance on the dual feasibility, a nonnegative scalar. The default is:

  • 1e-7 for the 'dual-simplex-highs' and 'dual-simplex-legacy' algorithms

  • 1e-6 for the 'interior-point' algorithm

  • 1e-8 for the 'interior-point-legacy' algorithm

For optimset, the name is TolFun. See Current and Legacy Option Names.

Dual-Simplex-Highs Algorithm
ConstraintTolerance

Feasibility tolerance for constraints, a nonnegative scalar. ConstraintTolerance measures primal feasibility tolerance. The default is 1e-7.

MaxTime

Maximum amount of time in seconds that the algorithm runs. The default is Inf.

Presolve

Level of LP presolving prior to dual simplex algorithm iterations. Specify 'auto' (default), 'off', or 'on'.

Dual-Simplex-Legacy Algorithm
ConstraintTolerance

Feasibility tolerance for constraints, a nonnegative scalar. ConstraintTolerance measures primal feasibility tolerance. The default is 1e-4.

For optimset, the name is TolCon. See Current and Legacy Option Names.

MaxTime

Maximum amount of time in seconds that the algorithm runs. The default is Inf.

Preprocess

Level of LP preprocessing prior to dual simplex algorithm iterations. Specify 'basic' (default) or 'none'.

Interior-Point Algorithm
ConstraintTolerance

Feasibility tolerance for constraints, a nonnegative scalar. ConstraintTolerance measures primal feasibility tolerance. The default is 1e-6.

For optimset, the name is TolCon. See Current and Legacy Option Names.

Example: options = optimoptions('linprog','Algorithm','interior-point','Display','iter')

Problem structure, specified as a structure with the following fields.

f

Linear objective function vector f

Aineq

Matrix for linear inequality constraints

bineq

Vector for linear inequality constraints

Aeq

Matrix for linear equality constraints

beq

Vector for linear equality constraints

lb

Vector of lower bounds

ub

Vector of upper bounds

solver

'linprog'

options

Options created with optimoptions

You must supply at least the solver field in the problem structure.

Data Types: struct
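
For example, a minimal sketch that builds the structure by hand, reusing the inequality data from the earlier examples:

problem.f = [-1; -1/3];
problem.Aineq = [1 1; 1 1/4; 1 -1; -1/4 -1; -1 -1; -1 1];
problem.bineq = [2; 1; 2; 1; -1; 2];
problem.solver = 'linprog';
problem.options = optimoptions('linprog');   % default options
x = linprog(problem);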

Output Arguments

Solution, returned as a real vector or real array. The size of x is the same as the size of f.

Objective function value at the solution, returned as a real number. Generally, fval = f'*x.
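
As a quick check, you can reproduce this value from the returned solution:

fvalCheck = f(:)'*x(:);   % matches fval up to roundoff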

Reason linprog stopped, returned as an integer.

3

The solution is feasible with respect to the relative ConstraintTolerance tolerance, but is not feasible with respect to the absolute tolerance.

1

Function converged to a solution x.

0

Number of iterations exceeded options.MaxIterations or solution time in seconds exceeded options.MaxTime.

-2

No feasible point was found.

-3

Problem is unbounded.

-4

NaN value was encountered during execution of the algorithm.

-5

Both primal and dual problems are infeasible.

-7

Search direction became too small. No further progress could be made.

-9

Solver lost feasibility.

Exitflags 3 and -9 relate to solutions that have large infeasibilities. These usually arise from linear constraint matrices that have large condition number, or problems that have large solution components. To correct these issues, try to scale the coefficient matrices, eliminate redundant linear constraints, or give tighter bounds on the variables.
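
For example, one simple way to rescale badly scaled inequality rows before solving (a sketch, not part of linprog itself):

rowScale = max(abs(A),[],2);           % largest magnitude in each row
rowScale(rowScale == 0) = 1;           % leave all-zero rows unchanged
Ascaled = A ./ rowScale;               % scale each row (implicit expansion)
bscaled = b(:) ./ rowScale;            % scale the right-hand side to match
x = linprog(f,Ascaled,bscaled,Aeq,beq,lb,ub);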

Information about the optimization process, returned as a structure with these fields.

iterations

Number of iterations

algorithm

Optimization algorithm used

cgiterations

0 (interior-point algorithm only, included for backward compatibility)

message

Exit message

constrviolation

Maximum of constraint functions

firstorderopt

First-order optimality measure

Lagrange multipliers at the solution, returned as a structure with these fields.

lower

Lower bounds corresponding to lb

upper

Upper bounds corresponding to ub

ineqlin

Linear inequalities corresponding to A and b

eqlin

Linear equalities corresponding to Aeq and beq

The Lagrange multipliers for linear constraints satisfy this equation with length(f) components:

f + A'*lambda.ineqlin + Aeq'*lambda.eqlin + lambda.upper - lambda.lower = 0,

based on the Lagrangian

f'*x + lambda.ineqlin'*(A*x - b) + lambda.eqlin'*(Aeq*x - beq) + lambda.upper'*(x - ub) + lambda.lower'*(lb - x).

This sign convention matches that of nonlinear solvers (see Constrained Optimality Theory). However, this sign is the opposite of the sign in much linear programming literature, so a linprog Lagrange multiplier is the negative of the associated "shadow price."
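
As a numerical check (a sketch reusing the workspace of the Lagrange multiplier example above, which has no equality or upper-bound constraints, so those terms drop out):

stationarity = f + A'*lambda.ineqlin - lambda.lower
% stationarity is approximately zeros(3,1) at the returned solution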

More About

Enhanced Exit Messages

The next few items list the possible enhanced exit messages from linprog. Enhanced exit messages give a link for more information as the first sentence of the message.

Minimum Found That Satisfies The Constraints

The solver found a minimizing point that satisfies all bounds and linear constraints. Since the problem is convex, the minimizing point is a global minimum. For more information, see Local vs. Global Optima.

Solution Found During Presolve

The solver found the solution during the presolve phase. This means the bounds, linear constraints, and f (linear objective coefficient) immediately lead to a solution. For more information, see Presolve/Postsolve.

Exceeded the iteration limit

linprog exceeded the iteration limit. linprog uses the MaxIterations option (a tolerance) as a stopping criterion.

You can change the value of an option using dot notation:

options.MaxIterations = 5e4;

You can also change the value using optimoptions:

options = optimoptions(options,'MaxIterations',5e4);

Exceeded the time limit

linprog uses the MaxTime option (a tolerance) as a stopping criterion.

You can change the value of an option using dot notation:

options.MaxTime = 5e4;

You can also change the value using optimoptions:

options = optimoptions(options,'MaxTime',5e4);

Problem is unbounded

linprog stopped because there is no solution to the linear programming problem. For any target value there are feasible points with objective value smaller than the target.

Exceeded its allocated memory

linprog ran out of memory. It is possible that reformulating the problem could lead to a solution; see Williams [1].

No feasible solution found

linprog stopped because it did not find a feasible point. The problem could be infeasible. To proceed, check your constraints, or try a different linprog algorithm. For help examining the inconsistent linear constraints, see Investigate Linear Infeasibilities.

The Problem Is Infeasible

During presolve, the solver found that the problem has an inconsistent formulation. Inconsistent means not all constraints can be satisfied at a single point x. For more information, see Presolve/Postsolve.

The Problem Is Unbounded

During presolve, the solver found a feasible direction where the objective function decreases without bound. For more information, see Presolve/Postsolve.

Problem Appears Unbounded

linprog stopped because there is no solution to the linear programming problem. For any target value, it appears that there are feasible points with objective value smaller than the target.

Definitions for Exit Messages

The next few items contain definitions for terms in the linprog exit messages.

tolerance

Generally, a tolerance is a threshold which, if crossed, stops the iterations of a solver. For more information on tolerances, see Tolerances and Stopping Criteria.

Run a Different Algorithm

Your options have LargeScale set to 'off'. linprog now ignores the LargeScale option.

Previously, setting LargeScale to 'off' caused linprog to use a medium-scale algorithm.

To avoid warnings and errors:

  • Do not use the LargeScale option.

  • Use the Algorithm option to select the algorithm.

Tip

Set options using optimoptions to avoid warnings.

Choose a Different Algorithm

The linprog 'active-set' and 'simplex' algorithms have been removed. Your current option settings specify one of these algorithms. To avoid a warning, and to ensure that your code will run without error, set your 'Algorithm' option to the default 'dual-simplex-highs' algorithm, or to one of the 'interior-point', 'interior-point-legacy', or 'dual-simplex-legacy' algorithms. For details on choosing an algorithm, see Linear Programming Algorithms.
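
For example, to set the default algorithm explicitly:

options = optimoptions('linprog','Algorithm','dual-simplex-highs');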

Algorithms

Dual-Simplex-Highs Algorithm

For a description, see Dual-Simplex-Highs Algorithm.

Dual-Simplex-Legacy Algorithm

For a description, see Dual-Simplex-Legacy Algorithm.

Interior-Point-Legacy Algorithm

The 'interior-point-legacy' method is based on LIPSOL (Linear Interior Point Solver, [3]), which is a variant of Mehrotra's predictor-corrector algorithm [2], a primal-dual interior-point method. A number of preprocessing steps occur before the algorithm begins to iterate. See Interior-Point-Legacy Linear Programming.

The first stage of the algorithm might involve some preprocessing of the constraints (see Interior-Point-Legacy Linear Programming). Several conditions might cause linprog to exit with an infeasibility message. In each case, linprog returns a negative exitflag to indicate failure.

  • If a row of all zeros is detected in Aeq, but the corresponding element of beq is not zero, then the exit message is

    Exiting due to infeasibility: An all-zero row in the
    constraint matrix does not have a zero in corresponding
    right-hand-side entry.
  • If one of the elements of x is found not to be bounded below, then the exit message is

    Exiting due to infeasibility: Objective f'*x is unbounded below.
  • If one of the rows of Aeq has only one nonzero element, then the associated value in x is called a singleton variable. In this case, the value of that component of x can be computed from Aeq and beq. If the value computed violates another constraint, then the exit message is

    Exiting due to infeasibility: Singleton variables in
    equality constraints are not feasible.
  • If the singleton variable can be solved for, but the solution violates the upper or lower bounds, then the exit message is

    Exiting due to infeasibility: Singleton variables in
    the equality constraints are not within bounds.

Note

The preprocessing steps are cumulative. For example, even if your constraint matrix does not have a row of all zeros to begin with, other preprocessing steps can cause such a row to occur.

When the preprocessing finishes, the iterative part of the algorithm begins and continues until the stopping criteria are met. (For more information about residuals, the primal problem, the dual problem, and the related stopping criteria, see Interior-Point-Legacy Linear Programming.) If the residuals are growing instead of getting smaller, or the residuals are neither growing nor shrinking, one of the following two termination messages is displayed, respectively:

One or more of the residuals, duality gap, or total relative error 
has grown 100000 times greater than its minimum value so far:

or

One or more of the residuals, duality gap, or total relative error 
has stalled:

After one of these messages is displayed, it is followed by one of the following messages indicating that the dual, the primal, or both appear to be infeasible.

  • The dual appears to be infeasible (and the primal unbounded). (The primal residual < OptimalityTolerance.)

  • The primal appears to be infeasible (and the dual unbounded). (The dual residual < OptimalityTolerance.)

  • The dual appears to be infeasible (and the primal unbounded) since the dual residual > sqrt(OptimalityTolerance). (The primal residual < 10*OptimalityTolerance.)

  • The primal appears to be infeasible (and the dual unbounded) since the primal residual > sqrt(OptimalityTolerance). (The dual residual < 10*OptimalityTolerance.)

  • The dual appears to be infeasible and the primal unbounded since the primal objective < -1e+10 and the dual objective < 1e+6.

  • The primal appears to be infeasible and the dual unbounded since the dual objective > 1e+10 and the primal objective > -1e+6.

  • Both the primal and the dual appear to be infeasible.

For example, the primal (objective) can be unbounded and the primal residual, which is a measure of primal constraint satisfaction, can be small.

Interior-Point Algorithm

The 'interior-point' algorithm is similar to 'interior-point-legacy', but with a more efficient factorization routine, and with different preprocessing. See Interior-Point linprog Algorithm.

Alternative Functionality

App

The Optimize Live Editor task provides a visual interface for linprog.

References

[1] Dantzig, G.B., A. Orden, and P. Wolfe. “Generalized Simplex Method for Minimizing a Linear Form Under Linear Inequality Restraints.” Pacific Journal Math., Vol. 5, 1955, pp. 183–195.

[2] Mehrotra, S. “On the Implementation of a Primal-Dual Interior Point Method.” SIAM Journal on Optimization, Vol. 2, 1992, pp. 575–601.

[3] Zhang, Y., “Solving Large-Scale Linear Programs by Interior-Point Methods Under the MATLAB Environment.” Technical Report TR96-01, Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, July 1995.

Version History

Introduced before R2006a
