Is it better to trick MATLAB into doing constrained optimisation?
Assume that I want to do some constrained optimisation. I am using fmincon with the SQP algorithm, but I am hoping to get intuition which is not algorithm specific.
What I'm wondering is whether it is generally best to pass the constraints to the optimiser or to use "tricks" to obtain the same effect. Let me show an example of what I mean by a trick. Say I want x_1 + x_2 = 1 always; then I can simply pass only x_1 and, at the beginning of the objective function, set x_2 = 1 - x_1. Another example: if I want x_3 >= 0, I can simply pass y = log(x) and start my function with exponentiation.
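To make the two tricks concrete, here is a minimal MATLAB sketch. The objective f(x) = x_1^2 + x_2^2 and the second objective with its minimum at x_3 = 2 are my own toy choices, not from the thread; the point is only the mechanics of substitution and reparameterisation:

```matlab
% Constrained form: minimise f(x) subject to x1 + x2 = 1 via fmincon/SQP
f = @(x) x(1)^2 + x(2)^2;
Aeq = [1 1]; beq = 1;
opts = optimoptions('fmincon', 'Algorithm', 'sqp');
x_con = fmincon(f, [0; 0], [], [], Aeq, beq, [], [], [], opts);

% "Trick" form: substitute x2 = 1 - x1 and optimise unconstrained over x1
g = @(x1) f([x1; 1 - x1]);
x1_free = fminunc(g, 0);
x_free = [x1_free; 1 - x1_free];   % both runs should land near [0.5; 0.5]

% Positivity trick: optimise over y, with x3 = exp(y) positive by construction
h = @(y) (exp(y) - 2)^2;           % toy objective with minimum at x3 = 2
y_opt = fminunc(h, 0);
x3 = exp(y_opt);                   % positivity is never violated along the way
```

The first trick removes both a constraint and a variable; the second removes a bound at the cost of changing the geometry of the problem, which is exactly what the answer below weighs up.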
The more I think about it, the more cases it seems one can handle with tricks like this, so what I'm curious to know is which strategy is generally better. Is it ever better to pass equality, bound, or inequality constraints instead of tricking MATLAB into always satisfying them, as exemplified above?
Accepted Answer
Say I want x_1+x_2 = 1 always, then I can simply pass only x_1 and at the beginning of the function set x_2=1-x_1.
I would argue that this example is better, because you both eliminate a constraint (simplifying the optimization) and reduce the dimension of the problem. Reducing dimension usually speeds up convergence.
The other example y=log(x), I did not understand. It's not a positivity preserving transformation in any way that I can see. You could make a transformation x_3=y^2. One danger in this is that the derivative with respect to y is always zero at y=0, even though this may not be the location of a local minimum in x_3. A derivative-based optimizer could therefore get stuck there if you happen to initialize at y=0.
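A short sketch of that failure mode, with a toy objective of my own choosing (minimum of f at x_3 = 1). By the chain rule, d/dy f(y^2) = f'(y^2) * 2y, which is exactly zero at y = 0 whatever f' is, so a gradient-based solver initialized there sees a stationary point:

```matlab
f = @(x3) (x3 - 1)^2;      % unconstrained minimum at x3 = 1
g = @(y) f(y^2);           % positivity transform x3 = y^2

y_stuck = fminunc(g, 0);   % gradient is exactly 0 at y = 0: solver stops
                           % immediately at a point that is not the minimum
y_ok    = fminunc(g, 0.1); % started off zero: converges, y_ok^2 close to 1
```

In fact y = 0 is a local maximum of g here (g(0) = 1, smaller nearby), yet the gradient test is satisfied, which is exactly the danger described above.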
2 Comments
Henrik Dam on 22 Jun 2015 (edited: 22 Jun 2015)
Exponentiating the resulting y yields a valid positive solution for x. Voila, no constraints.
This is essentially the same as what I was saying, but I proposed a different positivity transform x=y^2 whereas you proposed x=exp(y). Both transforms force x to take on non-negative values while y is allowed to vary over the entire real line. Note, however, that an exponential transformation doesn't give a problem entirely equivalent to the constraints implemented by MATLAB solvers. MATLAB solvers implement closed constraints x>=0 whereas x=exp(y) only allows x to range over x>0. If the minimum of f(x) lies at x=0, your exponential transformation will force y to go to -Inf and you will get numerical problems.
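A quick sketch of that boundary problem, using f(x) = x^2 as a toy objective (my choice) whose minimum over x >= 0 sits exactly at x = 0:

```matlab
% With x = exp(y), the boundary point x = 0 is unreachable: f(exp(y))
% keeps decreasing as y -> -Inf, so the iterates drift to very negative y
% until progress stalls numerically.
f = @(x) x^2;              % minimum over x >= 0 is on the boundary, x = 0
g = @(y) f(exp(y));
y_opt = fminunc(g, 0);     % ends at a large negative y; exp(y_opt) is tiny
                           % but strictly positive, and exitflag/optimality
                           % messages reflect the stalled progress
```

With the closed bound lb = 0 passed to fmincon instead, the solver can return x = 0 exactly.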
As for what Walter was saying, I think it applies to a different issue than what you are talking about. He is talking about the difference between the least squares problem that minimizes norm(y-g(x))^2 and
min norm(log(y) - log(g(x)))^2
This is not a transformation of the domain of x like you are talking about.
Regarding the dimension reduction, is there any reason for MATLAB to not simply do this itself? I mean, I can make this trick work for any linear equality where variables can be reduced.
For situations as simple as a single equality constraint, the savings are too negligible to make it worthwhile. You can generalize the dimension reduction to the case of N equalities, but the benefits are not always clear. You basically have to find an NxN nonsingular submatrix of Aeq in the equations Aeq*x = beq, so that N of the variables can be solved for in terms of the rest. The computational effort to do so for large N may or may not be prohibitive, and that can be difficult for MATLAB to determine automatically.
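One standard way to carry out this generalized reduction, sketched here with a toy system and objective of my own (not from the thread), is to parameterise the feasible set with a null-space basis rather than hunt for an invertible submatrix by hand:

```matlab
% Eliminate N linear equalities Aeq*x = beq by writing x = xp + Z*z,
% where xp is any particular solution and the columns of Z span
% null(Aeq). Optimising over z is unconstrained, with dimension n - N.
Aeq = [1 1 0; 0 1 1];  beq = [1; 2];   % N = 2 equalities, n = 3 unknowns
xp  = Aeq \ beq;                       % a particular solution
Z   = null(Aeq);                       % null-space basis (3-by-1 here)

f = @(x) sum(x.^2);                    % toy objective
g = @(z) f(xp + Z*z);                  % reduced, unconstrained objective
z_opt = fminunc(g, zeros(size(Z,2), 1));
x_opt = xp + Z*z_opt;                  % satisfies Aeq*x_opt = beq exactly
```

For large sparse Aeq the cost of computing the null space is exactly the kind of overhead that may or may not pay off, which is the trade-off described above.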