# lsqnonneg

Solve nonnegative linear least-squares problem

## Syntax

``x = lsqnonneg(C,d)``
``x = lsqnonneg(C,d,options)``
``x = lsqnonneg(problem)``
``[x,resnorm,residual] = lsqnonneg(___)``
``[x,resnorm,residual,exitflag,output] = lsqnonneg(___)``
``[x,resnorm,residual,exitflag,output,lambda] = lsqnonneg(___)``

## Description

Solve nonnegative least-squares curve fitting problems of the form

$\underset{x}{\mathrm{min}}{‖Cx-d‖}_{2}^{2},\text{ where }x\ge 0.$

> **Note**
>
> `lsqnonneg` applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.


`x = lsqnonneg(C,d)` returns the vector `x` that minimizes `norm(C*x-d)` subject to `x ≥ 0`. Arguments `C` and `d` must be real.


`x = lsqnonneg(C,d,options)` minimizes with the optimization options specified in the structure `options`. Use `optimset` to set these options.

`x = lsqnonneg(problem)` finds the minimum for `problem`, a structure described in `problem`.


`[x,resnorm,residual] = lsqnonneg(___)`, for any previous syntax, additionally returns the value of the squared 2-norm of the residual, `norm(C*x-d)^2`, and the residual `d-C*x`.

`[x,resnorm,residual,exitflag,output] = lsqnonneg(___)` additionally returns a value `exitflag` that describes the exit condition of `lsqnonneg`, and a structure `output` with information about the optimization process.


`[x,resnorm,residual,exitflag,output,lambda] = lsqnonneg(___)` additionally returns the Lagrange multiplier vector `lambda`.

## Examples


### Nonnegative Linear Least Squares

Compute a nonnegative solution to a linear least-squares problem, and compare the result to the solution of an unconstrained problem.

Prepare a `C` matrix and `d` vector for the problem $\mathrm{min}||Cx-d||$.

```matlab
C = [0.0372    0.2869
     0.6861    0.7071
     0.6233    0.6245
     0.6344    0.6170];
d = [0.8587
     0.1781
     0.0747
     0.8405];
```

Compute the constrained and unconstrained solutions.

`x = lsqnonneg(C,d)`

```
x = 2×1

         0
    0.6929
```

`xunc = C\d`

```
xunc = 2×1

   -2.5627
    3.1108
```

All entries in `x` are nonnegative, but some entries in `xunc` are negative.

Compute the norms of the residuals for the two solutions.

`constrained_norm = norm(C*x - d)`

```
constrained_norm = 0.9118
```

`unconstrained_norm = norm(C*xunc - d)`

```
unconstrained_norm = 0.6674
```

The unconstrained solution has a smaller residual norm because constraints can only increase a residual norm.
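For comparison outside MATLAB, the same experiment can be sketched in Python using SciPy's `nnls` (nonnegative) and NumPy's `lstsq` (unconstrained). This is an illustrative equivalent, not part of `lsqnonneg` itself; it reuses the `C` and `d` values from the example above.

```python
import numpy as np
from scipy.optimize import nnls

# Same 4-by-2 problem as in the example above
C = np.array([[0.0372, 0.2869],
              [0.6861, 0.7071],
              [0.6233, 0.6245],
              [0.6344, 0.6170]])
d = np.array([0.8587, 0.1781, 0.0747, 0.8405])

x, _ = nnls(C, d)                              # nonnegative solution
xunc, *_ = np.linalg.lstsq(C, d, rcond=None)   # unconstrained solution

print(x)     # all entries >= 0
print(xunc)  # contains a negative entry
print(np.linalg.norm(C @ x - d))     # constrained residual norm (larger)
print(np.linalg.norm(C @ xunc - d))  # unconstrained residual norm (smaller)
```

As in the MATLAB example, the constrained residual norm comes out larger, since the feasible set is smaller.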

### Set Display Options

Set the `Display` option to `'final'` to see output when `lsqnonneg` finishes.

Create the options.

`options = optimset('Display','final');`

Prepare a `C` matrix and `d` vector for the problem $\mathrm{min}||Cx-d||$.

```matlab
C = [0.0372    0.2869
     0.6861    0.7071
     0.6233    0.6245
     0.6344    0.6170];
d = [0.8587
     0.1781
     0.0747
     0.8405];
```

Call `lsqnonneg` with the options structure.

`x = lsqnonneg(C,d,options);`
```
Optimization terminated.
```

### Obtain Residuals

Call `lsqnonneg` with outputs to obtain the solution, residual norm, and residual vector.

Prepare a `C` matrix and `d` vector for the problem $\mathrm{min}||Cx-d||$.

```matlab
C = [0.0372    0.2869
     0.6861    0.7071
     0.6233    0.6245
     0.6344    0.6170];
d = [0.8587
     0.1781
     0.0747
     0.8405];
```

Obtain the solution and residual information.

`[x,resnorm,residual] = lsqnonneg(C,d)`

```
x = 2×1

         0
    0.6929

resnorm = 0.8315

residual = 4×1

    0.6599
   -0.3119
   -0.3580
    0.4130
```

Verify that the returned residual norm is the square of the norm of the returned residual vector.

`norm(residual)^2`

```
ans = 0.8315
```
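The same check can be sketched in Python with SciPy's `nnls` (an illustrative equivalent, not part of the MATLAB toolbox). Note one difference: `nnls` returns the unsquared residual norm as its second output, whereas `lsqnonneg` returns the squared norm as `resnorm`.

```python
import numpy as np
from scipy.optimize import nnls

C = np.array([[0.0372, 0.2869],
              [0.6861, 0.7071],
              [0.6233, 0.6245],
              [0.6344, 0.6170]])
d = np.array([0.8587, 0.1781, 0.0747, 0.8405])

x, rnorm = nnls(C, d)   # rnorm = norm(C@x - d), NOT squared
residual = d - C @ x
resnorm = rnorm**2      # corresponds to MATLAB's resnorm

print(resnorm)                      # ~0.8315
print(np.linalg.norm(residual)**2)  # same value
```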

### Obtain All Output Arguments

Request all output arguments to examine the solution and solution process after `lsqnonneg` finishes.

Prepare a `C` matrix and `d` vector for the problem $\mathrm{min}||Cx-d||$.

```matlab
C = [0.0372    0.2869
     0.6861    0.7071
     0.6233    0.6245
     0.6344    0.6170];
d = [0.8587
     0.1781
     0.0747
     0.8405];
```

Solve the problem, requesting all output arguments.

`[x,resnorm,residual,exitflag,output,lambda] = lsqnonneg(C,d)`

```
x = 2×1

         0
    0.6929

resnorm = 0.8315

residual = 4×1

    0.6599
   -0.3119
   -0.3580
    0.4130

exitflag = 1

output = struct with fields:
    iterations: 1
     algorithm: 'active-set'
       message: 'Optimization terminated.'

lambda = 2×1

   -0.1506
   -0.0000
```

`exitflag` is `1`, indicating a correct solution.

`x(1) = 0`, and the corresponding `lambda(1) = -0.1506` is negative, showing the correct duality. Similarly, `x(2) > 0`, and the corresponding `lambda(2) = 0`.
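This duality can be verified numerically: at the solution, the multiplier vector equals the dual vector `C'*(d - C*x)`, the negative gradient of the squared residual. The check below is an illustrative Python sketch using SciPy's `nnls`; the identity for `lam` follows from the KKT conditions rather than from the `lsqnonneg` documentation.

```python
import numpy as np
from scipy.optimize import nnls

C = np.array([[0.0372, 0.2869],
              [0.6861, 0.7071],
              [0.6233, 0.6245],
              [0.6344, 0.6170]])
d = np.array([0.8587, 0.1781, 0.0747, 0.8405])

x, _ = nnls(C, d)
lam = C.T @ (d - C @ x)  # dual vector; matches lsqnonneg's lambda output

# Complementarity: x(i) > 0 => lam(i) ~ 0; lam(i) < 0 => x(i) = 0
print(lam)      # ~ [-0.1506, 0.0000]
print(x @ lam)  # ~ 0
```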

## Input Arguments


### `C`

Linear multiplier, specified as a real matrix. `C` represents the variable C in the problem

$\underset{x}{\mathrm{min}}{‖Cx-d‖}_{2}^{2}.$

For compatibility, the number of rows of `C` must equal the length of `d`.

Example: `C = [1,2;3,-1;-4,4]`

Data Types: `double`

### `d`

Additive term, specified as a real vector. `d` represents the variable d in the problem

$\underset{x}{\mathrm{min}}{‖Cx-d‖}_{2}^{2}.$

For compatibility, the length of `d` must equal the number of rows of `C`.

Example: `d = [1;-6;5]`

Data Types: `double`

### `options`

Optimization options, specified as a structure such as `optimset` returns. You can use `optimset` to set or change the values of these fields in the options structure. See Optimization Options Reference for detailed information.

- `Display`: Level of display.
  - `'notify'` (default) displays output only if the function does not converge.
  - `'off'` or `'none'` displays no output.
  - `'final'` displays just the final output.
- `TolX`: Termination tolerance on `x`, a positive scalar. The default is `10*eps*norm(C,1)*length(C)`. See Tolerances and Stopping Criteria.

Example: `options = optimset('Display','final')`

Data Types: `struct`

### `problem`

Problem structure, specified as a structure with the following fields.

| Field Name | Entry |
| --- | --- |
| `C` | Real matrix |
| `d` | Real vector |
| `solver` | `'lsqnonneg'` |
| `options` | Options structure such as returned by `optimset` |

Data Types: `struct`

## Output Arguments


### `x`

Solution, returned as a real vector. The length of `x` equals the number of columns of `C`.

### `resnorm`

Squared residual norm, returned as a nonnegative scalar. Equal to `norm(C*x-d)^2`.

### `residual`

Residual, returned as a real vector. The residual is `d - C*x`.

### `exitflag`

Reason `lsqnonneg` stopped, returned as an integer.

- `1`: Function converged to a solution `x`.
- `0`: Number of iterations exceeded `options.MaxIter`.

### `output`

Information about the optimization process, returned as a structure with fields:

- `iterations`: Number of iterations taken
- `algorithm`: `'active-set'`
- `message`: Exit message

### `lambda`

Lagrange multipliers, returned as a real vector. The entries satisfy the complementarity condition `x'*lambda = 0`: `lambda(i) ≤ 0` when `x(i)` is approximately `0`, and `lambda(i)` is approximately `0` when `x(i) > 0`.

## Tips

• For problems where `d` has length over 20, `lsqlin` might be faster than `lsqnonneg`. When `d` has length under 20, `lsqnonneg` is generally more efficient.

To convert between the solvers when `C` has more rows than columns (meaning the system is overdetermined),

`[x,resnorm,residual,exitflag,output,lambda] = lsqnonneg(C,d)`

is equivalent to

```matlab
[m,n] = size(C);
[x,resnorm,residual,exitflag,output,lambda_lsqlin] = ...
    lsqlin(C,d,-eye(n,n),zeros(n,1));
```

The only difference is that the corresponding Lagrange multipliers have opposite signs: `lambda = -lambda_lsqlin.ineqlin`.
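An analogous conversion exists in SciPy, where `nnls` is the specialized solver and `lsq_linear` with bounds `(0, inf)` plays the role of the general constrained solver. This is an illustrative sketch, not part of the MATLAB documentation.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

C = np.array([[0.0372, 0.2869],
              [0.6861, 0.7071],
              [0.6233, 0.6245],
              [0.6344, 0.6170]])
d = np.array([0.8587, 0.1781, 0.0747, 0.8405])

x_nnls, _ = nnls(C, d)                       # specialized NNLS solver
res = lsq_linear(C, d, bounds=(0, np.inf))   # general bounded LS solver

print(x_nnls)  # both solvers agree to solver tolerance
print(res.x)
```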

## Algorithms

`lsqnonneg` uses the algorithm described in [1]. The algorithm starts with a set of possible basis vectors and computes the associated dual vector `lambda`. It then selects the basis vector corresponding to the maximum value in `lambda` to swap it out of the basis in exchange for another possible candidate. This continues until `lambda ≤ 0`.
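The iteration of [1] can be sketched in Python as follows. This is a minimal, unoptimized rendering for exposition; the function name, the tolerance default (modeled on `lsqnonneg`'s `TolX` default), and the iteration cap are illustrative choices, and edge cases such as rank-deficient subproblems are not handled.

```python
import numpy as np

def nnls_lawson_hanson(C, d, tol=None, max_iter=None):
    """Minimal sketch of the Lawson-Hanson active-set NNLS algorithm."""
    m, n = C.shape
    if tol is None:
        tol = 10 * np.finfo(float).eps * np.linalg.norm(C, 1) * max(m, n)
    if max_iter is None:
        max_iter = 3 * n
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)   # passive (basis) set; ~P is the active set
    w = C.T @ (d - C @ x)         # dual vector
    it = 0
    while (~P).any() and (w[~P] > tol).any() and it < max_iter:
        # Swap the variable with the largest dual value into the passive set
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True
        # Solve the unconstrained least-squares subproblem on the passive set
        z = np.zeros(n)
        z[P] = np.linalg.lstsq(C[:, P], d, rcond=None)[0]
        # If the subproblem solution leaves the feasible region, step back
        # toward x and drop variables that hit zero
        while (z[P] <= 0).any():
            it += 1
            mask = P & (z <= 0)
            alpha = np.min(x[mask] / (x[mask] - z[mask]))
            x = x + alpha * (z - x)
            P[x <= tol] = False
            z = np.zeros(n)
            z[P] = np.linalg.lstsq(C[:, P], d, rcond=None)[0]
        x = z
        w = C.T @ (d - C @ x)
        it += 1
    return x
```

On the 4-by-2 example from this page, a single iteration moves the second variable into the basis and terminates, reproducing the documented solution `[0; 0.6929]`.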

## Alternative Functionality

### App

The Optimize Live Editor task provides a visual interface for `lsqnonneg`.

## References

[1] Lawson, C. L. and R. J. Hanson. Solving Least-Squares Problems. Upper Saddle River, NJ: Prentice Hall. 1974. Chapter 23, p. 161.

## Version History

Introduced before R2006a