# Nonnegative Linear Least Squares, Solver-Based

This example shows how to use several algorithms to solve a linear least-squares problem with the bound constraint that the solution is nonnegative. A linear least-squares problem has the form

$$\min_{x} \Vert Cx-d{\Vert}^{2}.$$

In this case, constrain the solution to be nonnegative, $$x\ge 0$$.
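As an illustrative aside (not part of the MATLAB example), the same kind of nonnegative least-squares problem can be solved outside MATLAB. This sketch uses SciPy's `nnls` on small random stand-in data, not the `particle` arrays used below:

```python
# Hedged sketch: solve min ||Cx - d||^2 subject to x >= 0 with SciPy.
# The arrays here are small random stand-ins, not the 'particle' data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
C = rng.standard_normal((20, 5))
d = rng.standard_normal(20)

# nnls returns the solution and the residual 2-norm ||Cx - d||
x, resnorm = nnls(C, d)
print(x.min())       # no component of the solution is negative
print(resnorm)
```

Note that `nnls` reports the residual norm itself, whereas the objective above is its square.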

To begin, load the arrays $$C$$ and $$d$$ into your workspace.

`load particle`

View the size of each array.

```
sizec = size(C)
```

```
sizec = 1×2

        2000         400
```

```
sized = size(d)
```

```
sized = 1×2

        2000           1
```

The `C` matrix has 2000 rows and 400 columns. Therefore, to have the correct size for matrix multiplication, the `x` vector has 400 rows. To represent the nonnegativity constraint, set lower bounds of zero on all variables.

```
lb = zeros(size(C,2),1);
```

Solve the problem using `lsqlin`.

```
[x,resnorm,residual,exitflag,output] = ...
    lsqlin(C,d,[],[],[],[],lb);
```

```
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
```

To see details of the optimization process, examine the output structure.

```
disp(output)
```

```
            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point'
      firstorderopt: 3.6717e-06
    constrviolation: 0
         iterations: 8
       linearsolver: 'sparse'
       cgiterations: []
```

The output structure shows that `lsqlin` uses a sparse internal linear solver for the interior-point algorithm and takes 8 iterations to reach a first-order optimality measure of about 3.7e-6.

### Change Algorithm

The trust-region-reflective algorithm handles bound-constrained problems. See how well it performs on this problem.

```
options = optimoptions('lsqlin','Algorithm','trust-region-reflective');
[x2,resnorm2,residual2,exitflag2,output2] = ...
    lsqlin(C,d,[],[],[],[],lb,[],[],options);
```

```
Local minimum possible.

lsqlin stopped because the relative change in function value is less than
the square root of the function tolerance and the rate of change in the
function value is slow.
```

```
disp(output2)
```

```
         iterations: 10
          algorithm: 'trust-region-reflective'
      firstorderopt: 2.7870e-05
       cgiterations: 42
    constrviolation: []
       linearsolver: []
            message: 'Local minimum possible....'
```

This time, the solver takes more iterations and reaches a solution with a higher (worse) first-order optimality measure.
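As a hedged aside, SciPy offers a comparable bound-constrained solver: `lsq_linear` with `method='trf'` runs a trust-region-reflective algorithm and reports a first-order optimality measure, so a similar experiment can be sketched outside MATLAB (small random stand-in data, not the example's arrays):

```python
# Sketch: trust-region-reflective solve of a nonnegative linear
# least-squares problem with SciPy's lsq_linear (stand-in data).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
C = rng.standard_normal((30, 8))
d = rng.standard_normal(30)

res = lsq_linear(C, d, bounds=(0, np.inf), method='trf')
print(res.success)       # convergence flag
print(res.optimality)    # first-order optimality measure
print(res.nit)           # iteration count
```

Here `res.cost` holds half the squared residual norm, so the residual norm itself is `sqrt(2*res.cost)`.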

To improve the first-order optimality measure, try setting the `SubproblemAlgorithm` option to `'factorization'`.

```
options.SubproblemAlgorithm = 'factorization';
[x3,resnorm3,residual3,exitflag3,output3] = ...
    lsqlin(C,d,[],[],[],[],lb,[],[],options);
```

```
Optimal solution found.
```

```
disp(output3)
```

```
         iterations: 12
          algorithm: 'trust-region-reflective'
      firstorderopt: 5.5907e-15
       cgiterations: 0
    constrviolation: []
       linearsolver: []
            message: 'Optimal solution found.'
```

Using this option brings the first-order optimality measure nearly to zero, which is the best possible result.

### Change Solver

Try solving the problem using the `lsqnonneg` solver, which is designed to handle nonnegative linear least squares.

```
[x4,resnorm4,residual4,exitflag4,output4] = lsqnonneg(C,d);
disp(output4)
```

```
    iterations: 184
     algorithm: 'active-set'
       message: 'Optimization terminated.'
```

`lsqnonneg` does not report a first-order optimality measure. Instead, investigate the residual norms. To see the lower-significance digits, subtract 22.5794 from each residual norm.

```
t = table(resnorm - 22.5794, resnorm2 - 22.5794, resnorm3 - 22.5794, resnorm4 - 22.5794,...
    'VariableNames',{'default','trust-region-reflective','factorization','lsqnonneg'})
```

```
t = 1×4 table

     default      trust-region-reflective    factorization    lsqnonneg
    __________    _______________________    _____________    __________

    7.5411e-05          4.9186e-05            4.9179e-05      4.9179e-05
```

The default `lsqlin` algorithm has a higher residual norm than the `trust-region-reflective` algorithm. The `factorization` and `lsqnonneg` residual norms are even lower, and are the same at this level of display precision. See which one is lower.

```
disp(resnorm3 - resnorm4)
```

```
   6.8212e-13
```

The `lsqnonneg` residual norm is the lowest by a negligible amount. However, `lsqnonneg` takes the most iterations to converge.
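For readers working outside MATLAB, a similar solver comparison can be sketched with SciPy: `nnls` (an active-set method, like `lsqnonneg`) against `lsq_linear` with the trust-region-reflective method. The data below are hypothetical stand-ins, not the `particle` arrays.

```python
# Sketch: compare residual norms from two SciPy solvers on the same
# nonnegative least-squares problem (random stand-in data).
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(2)
C = rng.standard_normal((50, 10))
d = rng.standard_normal(50)

x_nnls, rnorm_nnls = nnls(C, d)                       # active-set solve
res_trf = lsq_linear(C, d, bounds=(0, np.inf), method='trf', tol=1e-12)
rnorm_trf = np.linalg.norm(C @ res_trf.x - d)

# The problem is convex, so both solvers approach the same optimum
# and the residual norms should agree to high precision.
print(abs(rnorm_nnls - rnorm_trf))
```

As in the MATLAB example, the two optimal residual norms differ only at the level of numerical roundoff.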